00:00:00.000 Started by upstream project "autotest-per-patch" build number 132425 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.046 The recommended git tool is: git 00:00:00.046 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.075 Fetching changes from the remote Git repository 00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.118 Using shallow fetch with depth 1 00:00:00.118 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.118 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.855 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.869 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.882 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.882 > git config core.sparsecheckout # timeout=10 00:00:03.894 > git read-tree -mu HEAD # timeout=10 00:00:03.911 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.938 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.939 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.067 [Pipeline] Start of Pipeline 00:00:04.080 [Pipeline] library 00:00:04.082 Loading library shm_lib@master 00:00:04.082 Library shm_lib@master is cached. Copying from home. 00:00:04.099 [Pipeline] node 00:00:04.107 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:04.109 [Pipeline] { 00:00:04.121 [Pipeline] catchError 00:00:04.122 [Pipeline] { 00:00:04.136 [Pipeline] wrap 00:00:04.144 [Pipeline] { 00:00:04.153 [Pipeline] stage 00:00:04.155 [Pipeline] { (Prologue) 00:00:04.178 [Pipeline] echo 00:00:04.180 Node: VM-host-SM17 00:00:04.187 [Pipeline] cleanWs 00:00:04.199 [WS-CLEANUP] Deleting project workspace... 00:00:04.199 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.204 [WS-CLEANUP] done 00:00:04.422 [Pipeline] setCustomBuildProperty 00:00:04.506 [Pipeline] httpRequest 00:00:06.788 [Pipeline] echo 00:00:06.790 Sorcerer 10.211.164.101 is alive 00:00:06.800 [Pipeline] retry 00:00:06.801 [Pipeline] { 00:00:06.814 [Pipeline] httpRequest 00:00:06.819 HttpMethod: GET 00:00:06.819 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.821 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.827 Response Code: HTTP/1.1 200 OK 00:00:06.827 Success: Status code 200 is in the accepted range: 200,404 00:00:06.828 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.380 [Pipeline] } 00:00:07.396 [Pipeline] // retry 00:00:07.403 [Pipeline] sh 00:00:07.702 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.717 [Pipeline] httpRequest 00:00:10.166 [Pipeline] echo 00:00:10.168 Sorcerer 10.211.164.101 is alive 00:00:10.180 [Pipeline] retry 00:00:10.182 [Pipeline] { 00:00:10.199 [Pipeline] httpRequest 00:00:10.204 HttpMethod: GET 00:00:10.205 URL: http://10.211.164.101/packages/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz 00:00:10.205 Sending request to url: http://10.211.164.101/packages/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz 00:00:10.208 Response Code: HTTP/1.1 200 OK 00:00:10.209 Success: Status code 200 is in the accepted range: 200,404 00:00:10.210 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz 00:00:30.857 [Pipeline] } 00:00:30.906 [Pipeline] // retry 00:00:30.913 [Pipeline] sh 00:00:31.257 + tar --no-same-owner -xf spdk_2e015e34ff96df01d0b187f7b8e7e8a9485b9b60.tar.gz 00:00:34.559 [Pipeline] sh 00:00:34.845 + git -C spdk log --oneline -n5 00:00:34.845 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:00:34.845 aae11995f bdev/malloc: Fix unexpected DIF verification error for initial read 00:00:34.845 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled 00:00:34.845 ce2cd8dc9 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:00:34.845 2d31d77ac ut/bdev: Remove duplication with many stups among unit test files 00:00:34.867 [Pipeline] writeFile 00:00:34.885 [Pipeline] sh 00:00:35.171 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:35.184 [Pipeline] sh 00:00:35.464 + cat autorun-spdk.conf 00:00:35.464 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.464 SPDK_TEST_NVMF=1 00:00:35.464 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.464 SPDK_TEST_URING=1 00:00:35.464 SPDK_TEST_USDT=1 00:00:35.464 SPDK_RUN_UBSAN=1 00:00:35.464 NET_TYPE=virt 00:00:35.464 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.470 RUN_NIGHTLY=0 00:00:35.472 [Pipeline] } 00:00:35.488 [Pipeline] // stage 00:00:35.504 [Pipeline] stage 00:00:35.506 [Pipeline] { (Run VM) 00:00:35.520 [Pipeline] sh 00:00:35.820 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:35.820 + echo 'Start stage prepare_nvme.sh' 00:00:35.820 Start stage prepare_nvme.sh 00:00:35.820 + [[ -n 1 ]] 00:00:35.820 + disk_prefix=ex1 00:00:35.820 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 ]] 00:00:35.820 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf ]] 00:00:35.820 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf 00:00:35.820 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.820 ++ SPDK_TEST_NVMF=1 00:00:35.820 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.820 ++ SPDK_TEST_URING=1 00:00:35.820 ++ SPDK_TEST_USDT=1 00:00:35.820 ++ SPDK_RUN_UBSAN=1 00:00:35.820 ++ NET_TYPE=virt 00:00:35.820 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.820 ++ RUN_NIGHTLY=0 00:00:35.820 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:35.820 + nvme_files=() 00:00:35.820 + declare -A nvme_files 00:00:35.820 + backend_dir=/var/lib/libvirt/images/backends 00:00:35.820 + nvme_files['nvme.img']=5G 00:00:35.820 + nvme_files['nvme-cmb.img']=5G 00:00:35.820 + nvme_files['nvme-multi0.img']=4G 00:00:35.820 + nvme_files['nvme-multi1.img']=4G 00:00:35.820 + nvme_files['nvme-multi2.img']=4G 00:00:35.820 + nvme_files['nvme-openstack.img']=8G 00:00:35.820 + nvme_files['nvme-zns.img']=5G 00:00:35.820 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:35.820 + (( SPDK_TEST_FTL == 1 )) 00:00:35.820 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:35.820 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:35.820 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.820 + for nvme in "${!nvme_files[@]}" 00:00:35.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:36.410 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.410 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:36.410 + echo 'End stage prepare_nvme.sh' 00:00:36.410 End stage prepare_nvme.sh 00:00:36.422 [Pipeline] sh 00:00:36.702 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:36.702 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:36.702 00:00:36.702 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant 00:00:36.702 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk 00:00:36.702 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:36.702 HELP=0 00:00:36.702 DRY_RUN=0 00:00:36.702 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:36.702 NVME_DISKS_TYPE=nvme,nvme, 00:00:36.702 NVME_AUTO_CREATE=0 00:00:36.702 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:36.702 NVME_CMB=,, 00:00:36.702 NVME_PMR=,, 00:00:36.702 NVME_ZNS=,, 00:00:36.702 NVME_MS=,, 00:00:36.702 NVME_FDP=,, 00:00:36.702 SPDK_VAGRANT_DISTRO=fedora39 00:00:36.702 SPDK_VAGRANT_VMCPU=10 00:00:36.702 SPDK_VAGRANT_VMRAM=12288 00:00:36.702 SPDK_VAGRANT_PROVIDER=libvirt 00:00:36.702 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:36.702 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:36.702 SPDK_OPENSTACK_NETWORK=0 00:00:36.702 VAGRANT_PACKAGE_BOX=0 00:00:36.702 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile 00:00:36.702 FORCE_DISTRO=true 00:00:36.702 VAGRANT_BOX_VERSION= 00:00:36.702 EXTRA_VAGRANTFILES= 00:00:36.702 NIC_MODEL=e1000 00:00:36.702 00:00:36.702 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt' 00:00:36.702 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:39.987 Bringing machine 'default' up with 'libvirt' provider... 00:00:40.982 ==> default: Creating image (snapshot of base box volume). 00:00:40.982 ==> default: Creating domain with the following settings... 
00:00:40.982 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732121159_7f71bf61b2ec9bbe3fcc 00:00:40.982 ==> default: -- Domain type: kvm 00:00:40.982 ==> default: -- Cpus: 10 00:00:40.982 ==> default: -- Feature: acpi 00:00:40.982 ==> default: -- Feature: apic 00:00:40.982 ==> default: -- Feature: pae 00:00:40.982 ==> default: -- Memory: 12288M 00:00:40.982 ==> default: -- Memory Backing: hugepages: 00:00:40.982 ==> default: -- Management MAC: 00:00:40.982 ==> default: -- Loader: 00:00:40.982 ==> default: -- Nvram: 00:00:40.982 ==> default: -- Base box: spdk/fedora39 00:00:40.982 ==> default: -- Storage pool: default 00:00:40.982 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732121159_7f71bf61b2ec9bbe3fcc.img (20G) 00:00:40.982 ==> default: -- Volume Cache: default 00:00:40.982 ==> default: -- Kernel: 00:00:40.982 ==> default: -- Initrd: 00:00:40.982 ==> default: -- Graphics Type: vnc 00:00:40.982 ==> default: -- Graphics Port: -1 00:00:40.982 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.982 ==> default: -- Graphics Password: Not defined 00:00:40.982 ==> default: -- Video Type: cirrus 00:00:40.982 ==> default: -- Video VRAM: 9216 00:00:40.982 ==> default: -- Sound Type: 00:00:40.982 ==> default: -- Keymap: en-us 00:00:40.982 ==> default: -- TPM Path: 00:00:40.982 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.982 ==> default: -- Command line args: 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.982 ==> default: -> value=-drive, 00:00:40.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.982 ==> default: -> value=-drive, 00:00:40.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.982 ==> default: -> value=-drive, 00:00:40.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.982 ==> default: -> value=-drive, 00:00:40.982 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:40.982 ==> default: -> value=-device, 00:00:40.982 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.242 ==> default: Creating shared folders metadata... 00:00:41.242 ==> default: Starting domain. 00:00:43.143 ==> default: Waiting for domain to get an IP address... 00:00:58.021 ==> default: Waiting for SSH to become available... 00:00:59.396 ==> default: Configuring and enabling network interfaces... 
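Note: the "-device"/"-drive" value pairs listed above under "Command line args" define two emulated NVMe controllers for the test VM: nvme-0 (serial 12340) with a single namespace backed by ex1-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by ex1-nvme-multi0/1/2.img. As a reading aid only, that portion of the QEMU invocation written out by hand would look roughly like the sketch below (device/drive arguments copied from the log; the remainder of the command line that libvirt generates, such as machine type, memory and network options, is omitted here):

  # sketch of the NVMe-related QEMU arguments shown in the log above
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096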
00:01:03.621 default: SSH address: 192.168.121.95:22 00:01:03.621 default: SSH username: vagrant 00:01:03.621 default: SSH auth method: private key 00:01:05.524 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:13.640 ==> default: Mounting SSHFS shared folder... 00:01:14.577 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:14.577 ==> default: Checking Mount.. 00:01:15.953 ==> default: Folder Successfully Mounted! 00:01:15.953 ==> default: Running provisioner: file... 00:01:16.521 default: ~/.gitconfig => .gitconfig 00:01:17.089 00:01:17.089 SUCCESS! 00:01:17.089 00:01:17.089 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt and type "vagrant ssh" to use. 00:01:17.089 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.089 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt" to destroy all trace of vm. 00:01:17.089 00:01:17.098 [Pipeline] } 00:01:17.113 [Pipeline] // stage 00:01:17.123 [Pipeline] dir 00:01:17.124 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt 00:01:17.125 [Pipeline] { 00:01:17.167 [Pipeline] catchError 00:01:17.169 [Pipeline] { 00:01:17.182 [Pipeline] sh 00:01:17.462 + vagrant ssh-config --host vagrant 00:01:17.462 + sed -ne /^Host/,$p 00:01:17.462 + tee ssh_conf 00:01:21.717 Host vagrant 00:01:21.717 HostName 192.168.121.95 00:01:21.717 User vagrant 00:01:21.717 Port 22 00:01:21.717 UserKnownHostsFile /dev/null 00:01:21.717 StrictHostKeyChecking no 00:01:21.717 PasswordAuthentication no 00:01:21.717 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:21.717 IdentitiesOnly yes 00:01:21.717 LogLevel FATAL 00:01:21.717 ForwardAgent yes 00:01:21.717 ForwardX11 yes 00:01:21.717 00:01:21.735 [Pipeline] withEnv 00:01:21.738 [Pipeline] { 00:01:21.750 [Pipeline] sh 00:01:22.026 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:22.026 source /etc/os-release 00:01:22.026 [[ -e /image.version ]] && img=$(< /image.version) 00:01:22.026 # Minimal, systemd-like check. 00:01:22.026 if [[ -e /.dockerenv ]]; then 00:01:22.026 # Clear garbage from the node's name: 00:01:22.026 # agt-er_autotest_547-896 -> autotest_547-896 00:01:22.026 # $HOSTNAME is the actual container id 00:01:22.026 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:22.026 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:22.026 # We can assume this is a mount from a host where container is running, 00:01:22.026 # so fetch its hostname to easily identify the target swarm worker. 
00:01:22.026 container="$(< /etc/hostname) ($agent)" 00:01:22.026 else 00:01:22.026 # Fallback 00:01:22.026 container=$agent 00:01:22.026 fi 00:01:22.026 fi 00:01:22.026 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:22.026 00:01:22.295 [Pipeline] } 00:01:22.315 [Pipeline] // withEnv 00:01:22.322 [Pipeline] setCustomBuildProperty 00:01:22.337 [Pipeline] stage 00:01:22.340 [Pipeline] { (Tests) 00:01:22.360 [Pipeline] sh 00:01:22.640 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:22.912 [Pipeline] sh 00:01:23.190 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.464 [Pipeline] timeout 00:01:23.465 Timeout set to expire in 1 hr 0 min 00:01:23.467 [Pipeline] { 00:01:23.483 [Pipeline] sh 00:01:23.761 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:24.328 HEAD is now at 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:01:24.341 [Pipeline] sh 00:01:24.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:24.890 [Pipeline] sh 00:01:25.168 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.441 [Pipeline] sh 00:01:25.723 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:25.723 ++ readlink -f spdk_repo 00:01:25.723 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:25.723 + [[ -n /home/vagrant/spdk_repo ]] 00:01:25.723 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:25.723 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:25.723 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:25.723 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:25.723 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:25.723 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:25.723 + cd /home/vagrant/spdk_repo 00:01:25.723 + source /etc/os-release 00:01:25.724 ++ NAME='Fedora Linux' 00:01:25.724 ++ VERSION='39 (Cloud Edition)' 00:01:25.724 ++ ID=fedora 00:01:25.724 ++ VERSION_ID=39 00:01:25.724 ++ VERSION_CODENAME= 00:01:25.724 ++ PLATFORM_ID=platform:f39 00:01:25.724 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.724 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.724 ++ LOGO=fedora-logo-icon 00:01:25.724 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.724 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.724 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.724 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.724 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.724 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.724 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.724 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.724 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.724 ++ SUPPORT_END=2024-11-12 00:01:25.724 ++ VARIANT='Cloud Edition' 00:01:25.724 ++ VARIANT_ID=cloud 00:01:25.724 + uname -a 00:01:25.724 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.724 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.302 Hugepages 00:01:26.302 node hugesize free / total 00:01:26.302 node0 1048576kB 0 / 0 00:01:26.302 node0 2048kB 0 / 0 00:01:26.302 00:01:26.302 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.302 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.302 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.302 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:26.302 + rm -f /tmp/spdk-ld-path 00:01:26.302 + source autorun-spdk.conf 00:01:26.302 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.302 ++ SPDK_TEST_NVMF=1 00:01:26.302 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.302 ++ SPDK_TEST_URING=1 00:01:26.302 ++ SPDK_TEST_USDT=1 00:01:26.302 ++ SPDK_RUN_UBSAN=1 00:01:26.302 ++ NET_TYPE=virt 00:01:26.302 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.302 ++ RUN_NIGHTLY=0 00:01:26.302 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.302 + [[ -n '' ]] 00:01:26.302 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.302 + for M in /var/spdk/build-*-manifest.txt 00:01:26.302 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.302 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.302 + for M in /var/spdk/build-*-manifest.txt 00:01:26.302 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.302 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.302 + for M in /var/spdk/build-*-manifest.txt 00:01:26.302 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.302 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.302 ++ uname 00:01:26.560 + [[ Linux == \L\i\n\u\x ]] 00:01:26.560 + sudo dmesg -T 00:01:26.560 + sudo dmesg --clear 00:01:26.560 + dmesg_pid=5206 00:01:26.560 + [[ Fedora Linux == FreeBSD ]] 00:01:26.560 + sudo dmesg -Tw 00:01:26.560 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.560 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.560 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.560 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.560 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.560 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.560 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.560 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.560 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.560 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.560 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.560 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.560 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.560 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.560 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.560 16:46:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:26.560 16:46:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.560 16:46:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.560 16:46:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.561 16:46:44 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:26.561 16:46:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:26.561 16:46:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.561 16:46:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:26.561 16:46:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:26.561 16:46:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:26.561 16:46:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.561 16:46:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.561 16:46:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.561 16:46:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.561 16:46:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.561 16:46:45 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.561 16:46:45 -- paths/export.sh@5 -- $ export PATH 00:01:26.561 16:46:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.561 16:46:45 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:26.561 16:46:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:26.561 16:46:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732121205.XXXXXX 00:01:26.561 16:46:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732121205.zSmpr4 00:01:26.561 16:46:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:26.561 16:46:45 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:26.561 16:46:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:26.561 16:46:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:26.561 16:46:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.561 16:46:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:26.561 16:46:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:26.561 16:46:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.561 16:46:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:26.561 16:46:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:26.561 16:46:45 -- pm/common@17 -- $ local monitor 00:01:26.561 16:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.561 16:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.561 16:46:45 -- pm/common@21 -- $ date +%s 00:01:26.561 16:46:45 -- pm/common@25 -- $ sleep 1 00:01:26.561 16:46:45 -- pm/common@21 -- $ date +%s 00:01:26.561 16:46:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732121205 00:01:26.561 16:46:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732121205 00:01:26.819 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732121205_collect-cpu-load.pm.log 00:01:26.819 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732121205_collect-vmstat.pm.log 00:01:27.753 16:46:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:27.753 16:46:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.753 16:46:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.753 16:46:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:27.753 16:46:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.753 Wed Nov 20 04:46:46 PM UTC 2024 00:01:27.753 16:46:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.753 v25.01-pre-235-g2e015e34f 00:01:27.753 16:46:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:27.753 16:46:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.753 16:46:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.753 16:46:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:27.753 16:46:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:27.753 16:46:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.753 ************************************ 00:01:27.753 START TEST ubsan 00:01:27.753 ************************************ 00:01:27.753 using ubsan 00:01:27.753 16:46:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:27.753 00:01:27.753 real 0m0.000s 00:01:27.753 user 0m0.000s 00:01:27.753 sys 0m0.000s 00:01:27.753 16:46:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:27.753 ************************************ 00:01:27.753 END TEST ubsan 00:01:27.753 16:46:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.753 ************************************ 00:01:27.753 16:46:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.753 16:46:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.753 16:46:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.753 16:46:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:27.753 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:27.753 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:28.320 Using 'verbs' RDMA provider 00:01:41.458 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:56.371 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:56.371 Creating mk/config.mk...done. 00:01:56.371 Creating mk/cc.flags.mk...done. 00:01:56.371 Type 'make' to build. 
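Note: the configure invocation above and the make step that follows are the core of this build stage. A minimal way to reproduce the same build by hand inside the test VM would be roughly the sketch below (flags copied verbatim from the log; the autobuild.sh bookkeeping, such as the collect-cpu-load/collect-vmstat monitors, is left out):

  # sketch, assuming the SPDK checkout location used in this job
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10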
00:01:56.371 16:47:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:56.371 16:47:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.371 16:47:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.371 16:47:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.371 ************************************ 00:01:56.371 START TEST make 00:01:56.371 ************************************ 00:01:56.371 16:47:13 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:56.371 make[1]: Nothing to be done for 'all'. 00:02:08.582 The Meson build system 00:02:08.582 Version: 1.5.0 00:02:08.582 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:08.582 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:08.582 Build type: native build 00:02:08.582 Program cat found: YES (/usr/bin/cat) 00:02:08.582 Project name: DPDK 00:02:08.582 Project version: 24.03.0 00:02:08.582 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.582 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.582 Host machine cpu family: x86_64 00:02:08.582 Host machine cpu: x86_64 00:02:08.582 Message: ## Building in Developer Mode ## 00:02:08.582 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.582 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.582 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.582 Program python3 found: YES (/usr/bin/python3) 00:02:08.582 Program cat found: YES (/usr/bin/cat) 00:02:08.582 Compiler for C supports arguments -march=native: YES 00:02:08.582 Checking for size of "void *" : 8 00:02:08.582 Checking for size of "void *" : 8 (cached) 00:02:08.582 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.582 Library m found: YES 00:02:08.582 Library numa found: YES 00:02:08.582 Has header "numaif.h" : YES 00:02:08.582 Library fdt found: NO 00:02:08.582 Library execinfo found: NO 00:02:08.582 Has header "execinfo.h" : YES 00:02:08.582 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.582 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.582 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.582 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.582 Run-time dependency openssl found: YES 3.1.1 00:02:08.582 Run-time dependency libpcap found: YES 1.10.4 00:02:08.582 Has header "pcap.h" with dependency libpcap: YES 00:02:08.582 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.582 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.582 Compiler for C supports arguments -Wformat: YES 00:02:08.582 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.582 Compiler for C supports arguments -Wformat-security: NO 00:02:08.582 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.582 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.582 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.582 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.582 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.582 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.582 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.582 Compiler for C supports arguments -Wundef: YES 00:02:08.582 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.582 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:08.582 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.582 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.582 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.582 Program objdump found: YES (/usr/bin/objdump) 00:02:08.582 Compiler for C supports arguments -mavx512f: YES 00:02:08.582 Checking if "AVX512 checking" compiles: YES 00:02:08.582 Fetching value of define "__SSE4_2__" : 1 00:02:08.582 Fetching value of define "__AES__" : 1 00:02:08.582 Fetching value of define "__AVX__" : 1 00:02:08.582 Fetching value of define "__AVX2__" : 1 00:02:08.582 Fetching value of define "__AVX512BW__" : (undefined) 00:02:08.582 Fetching value of define "__AVX512CD__" : (undefined) 00:02:08.582 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:08.582 Fetching value of define "__AVX512F__" : (undefined) 00:02:08.582 Fetching value of define "__AVX512VL__" : (undefined) 00:02:08.582 Fetching value of define "__PCLMUL__" : 1 00:02:08.582 Fetching value of define "__RDRND__" : 1 00:02:08.582 Fetching value of define "__RDSEED__" : 1 00:02:08.582 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.582 Fetching value of define "__znver1__" : (undefined) 00:02:08.582 Fetching value of define "__znver2__" : (undefined) 00:02:08.582 Fetching value of define "__znver3__" : (undefined) 00:02:08.582 Fetching value of define "__znver4__" : (undefined) 00:02:08.582 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.582 Message: lib/log: Defining dependency "log" 00:02:08.582 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.582 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.582 Checking for function "getentropy" : NO 00:02:08.582 Message: lib/eal: Defining dependency "eal" 00:02:08.582 Message: lib/ring: Defining dependency "ring" 00:02:08.582 Message: lib/rcu: Defining dependency "rcu" 00:02:08.582 Message: lib/mempool: Defining dependency "mempool" 00:02:08.582 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.582 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.582 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.582 Compiler for C supports arguments -mpclmul: YES 00:02:08.582 Compiler for C supports arguments -maes: YES 00:02:08.582 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.582 Compiler for C supports arguments -mavx512bw: YES 00:02:08.582 Compiler for C supports arguments -mavx512dq: YES 00:02:08.582 Compiler for C supports arguments -mavx512vl: YES 00:02:08.582 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.582 Compiler for C supports arguments -mavx2: YES 00:02:08.582 Compiler for C supports arguments -mavx: YES 00:02:08.582 Message: lib/net: Defining dependency "net" 00:02:08.582 Message: lib/meter: Defining dependency "meter" 00:02:08.582 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.582 Message: lib/pci: Defining dependency "pci" 00:02:08.582 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.582 Message: lib/hash: Defining dependency "hash" 00:02:08.582 Message: lib/timer: Defining dependency "timer" 00:02:08.582 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.582 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.582 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.582 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.582 Message: lib/power: Defining 
dependency "power" 00:02:08.582 Message: lib/reorder: Defining dependency "reorder" 00:02:08.582 Message: lib/security: Defining dependency "security" 00:02:08.582 Has header "linux/userfaultfd.h" : YES 00:02:08.582 Has header "linux/vduse.h" : YES 00:02:08.582 Message: lib/vhost: Defining dependency "vhost" 00:02:08.582 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.582 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.582 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.582 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.582 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.582 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.582 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.582 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.582 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.582 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.582 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.582 Configuring doxy-api-html.conf using configuration 00:02:08.582 Configuring doxy-api-man.conf using configuration 00:02:08.582 Program mandb found: YES (/usr/bin/mandb) 00:02:08.582 Program sphinx-build found: NO 00:02:08.582 Configuring rte_build_config.h using configuration 00:02:08.582 Message: 00:02:08.582 ================= 00:02:08.582 Applications Enabled 00:02:08.582 ================= 00:02:08.582 00:02:08.582 apps: 00:02:08.582 00:02:08.582 00:02:08.582 Message: 00:02:08.582 ================= 00:02:08.582 Libraries Enabled 00:02:08.582 ================= 00:02:08.582 00:02:08.582 libs: 00:02:08.582 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.582 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.582 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.582 00:02:08.582 Message: 00:02:08.582 =============== 00:02:08.582 Drivers Enabled 00:02:08.582 =============== 00:02:08.582 00:02:08.582 common: 00:02:08.582 00:02:08.582 bus: 00:02:08.582 pci, vdev, 00:02:08.582 mempool: 00:02:08.582 ring, 00:02:08.582 dma: 00:02:08.582 00:02:08.582 net: 00:02:08.582 00:02:08.582 crypto: 00:02:08.582 00:02:08.582 compress: 00:02:08.582 00:02:08.582 vdpa: 00:02:08.583 00:02:08.583 00:02:08.583 Message: 00:02:08.583 ================= 00:02:08.583 Content Skipped 00:02:08.583 ================= 00:02:08.583 00:02:08.583 apps: 00:02:08.583 dumpcap: explicitly disabled via build config 00:02:08.583 graph: explicitly disabled via build config 00:02:08.583 pdump: explicitly disabled via build config 00:02:08.583 proc-info: explicitly disabled via build config 00:02:08.583 test-acl: explicitly disabled via build config 00:02:08.583 test-bbdev: explicitly disabled via build config 00:02:08.583 test-cmdline: explicitly disabled via build config 00:02:08.583 test-compress-perf: explicitly disabled via build config 00:02:08.583 test-crypto-perf: explicitly disabled via build config 00:02:08.583 test-dma-perf: explicitly disabled via build config 00:02:08.583 test-eventdev: explicitly disabled via build config 00:02:08.583 test-fib: explicitly disabled via build config 00:02:08.583 test-flow-perf: explicitly disabled via build config 00:02:08.583 test-gpudev: explicitly disabled via build config 00:02:08.583 test-mldev: explicitly disabled via build config 00:02:08.583 test-pipeline: 
explicitly disabled via build config 00:02:08.583 test-pmd: explicitly disabled via build config 00:02:08.583 test-regex: explicitly disabled via build config 00:02:08.583 test-sad: explicitly disabled via build config 00:02:08.583 test-security-perf: explicitly disabled via build config 00:02:08.583 00:02:08.583 libs: 00:02:08.583 argparse: explicitly disabled via build config 00:02:08.583 metrics: explicitly disabled via build config 00:02:08.583 acl: explicitly disabled via build config 00:02:08.583 bbdev: explicitly disabled via build config 00:02:08.583 bitratestats: explicitly disabled via build config 00:02:08.583 bpf: explicitly disabled via build config 00:02:08.583 cfgfile: explicitly disabled via build config 00:02:08.583 distributor: explicitly disabled via build config 00:02:08.583 efd: explicitly disabled via build config 00:02:08.583 eventdev: explicitly disabled via build config 00:02:08.583 dispatcher: explicitly disabled via build config 00:02:08.583 gpudev: explicitly disabled via build config 00:02:08.583 gro: explicitly disabled via build config 00:02:08.583 gso: explicitly disabled via build config 00:02:08.583 ip_frag: explicitly disabled via build config 00:02:08.583 jobstats: explicitly disabled via build config 00:02:08.583 latencystats: explicitly disabled via build config 00:02:08.583 lpm: explicitly disabled via build config 00:02:08.583 member: explicitly disabled via build config 00:02:08.583 pcapng: explicitly disabled via build config 00:02:08.583 rawdev: explicitly disabled via build config 00:02:08.583 regexdev: explicitly disabled via build config 00:02:08.583 mldev: explicitly disabled via build config 00:02:08.583 rib: explicitly disabled via build config 00:02:08.583 sched: explicitly disabled via build config 00:02:08.583 stack: explicitly disabled via build config 00:02:08.583 ipsec: explicitly disabled via build config 00:02:08.583 pdcp: explicitly disabled via build config 00:02:08.583 fib: explicitly disabled via build config 00:02:08.583 port: explicitly disabled via build config 00:02:08.583 pdump: explicitly disabled via build config 00:02:08.583 table: explicitly disabled via build config 00:02:08.583 pipeline: explicitly disabled via build config 00:02:08.583 graph: explicitly disabled via build config 00:02:08.583 node: explicitly disabled via build config 00:02:08.583 00:02:08.583 drivers: 00:02:08.583 common/cpt: not in enabled drivers build config 00:02:08.583 common/dpaax: not in enabled drivers build config 00:02:08.583 common/iavf: not in enabled drivers build config 00:02:08.583 common/idpf: not in enabled drivers build config 00:02:08.583 common/ionic: not in enabled drivers build config 00:02:08.583 common/mvep: not in enabled drivers build config 00:02:08.583 common/octeontx: not in enabled drivers build config 00:02:08.583 bus/auxiliary: not in enabled drivers build config 00:02:08.583 bus/cdx: not in enabled drivers build config 00:02:08.583 bus/dpaa: not in enabled drivers build config 00:02:08.583 bus/fslmc: not in enabled drivers build config 00:02:08.583 bus/ifpga: not in enabled drivers build config 00:02:08.583 bus/platform: not in enabled drivers build config 00:02:08.583 bus/uacce: not in enabled drivers build config 00:02:08.583 bus/vmbus: not in enabled drivers build config 00:02:08.583 common/cnxk: not in enabled drivers build config 00:02:08.583 common/mlx5: not in enabled drivers build config 00:02:08.583 common/nfp: not in enabled drivers build config 00:02:08.583 common/nitrox: not in enabled drivers build config 
00:02:08.583 common/qat: not in enabled drivers build config 00:02:08.583 common/sfc_efx: not in enabled drivers build config 00:02:08.583 mempool/bucket: not in enabled drivers build config 00:02:08.583 mempool/cnxk: not in enabled drivers build config 00:02:08.583 mempool/dpaa: not in enabled drivers build config 00:02:08.583 mempool/dpaa2: not in enabled drivers build config 00:02:08.583 mempool/octeontx: not in enabled drivers build config 00:02:08.583 mempool/stack: not in enabled drivers build config 00:02:08.583 dma/cnxk: not in enabled drivers build config 00:02:08.583 dma/dpaa: not in enabled drivers build config 00:02:08.583 dma/dpaa2: not in enabled drivers build config 00:02:08.583 dma/hisilicon: not in enabled drivers build config 00:02:08.583 dma/idxd: not in enabled drivers build config 00:02:08.583 dma/ioat: not in enabled drivers build config 00:02:08.583 dma/skeleton: not in enabled drivers build config 00:02:08.583 net/af_packet: not in enabled drivers build config 00:02:08.583 net/af_xdp: not in enabled drivers build config 00:02:08.583 net/ark: not in enabled drivers build config 00:02:08.583 net/atlantic: not in enabled drivers build config 00:02:08.583 net/avp: not in enabled drivers build config 00:02:08.583 net/axgbe: not in enabled drivers build config 00:02:08.583 net/bnx2x: not in enabled drivers build config 00:02:08.583 net/bnxt: not in enabled drivers build config 00:02:08.583 net/bonding: not in enabled drivers build config 00:02:08.583 net/cnxk: not in enabled drivers build config 00:02:08.583 net/cpfl: not in enabled drivers build config 00:02:08.583 net/cxgbe: not in enabled drivers build config 00:02:08.583 net/dpaa: not in enabled drivers build config 00:02:08.583 net/dpaa2: not in enabled drivers build config 00:02:08.583 net/e1000: not in enabled drivers build config 00:02:08.583 net/ena: not in enabled drivers build config 00:02:08.583 net/enetc: not in enabled drivers build config 00:02:08.583 net/enetfec: not in enabled drivers build config 00:02:08.583 net/enic: not in enabled drivers build config 00:02:08.583 net/failsafe: not in enabled drivers build config 00:02:08.583 net/fm10k: not in enabled drivers build config 00:02:08.583 net/gve: not in enabled drivers build config 00:02:08.583 net/hinic: not in enabled drivers build config 00:02:08.583 net/hns3: not in enabled drivers build config 00:02:08.583 net/i40e: not in enabled drivers build config 00:02:08.583 net/iavf: not in enabled drivers build config 00:02:08.583 net/ice: not in enabled drivers build config 00:02:08.583 net/idpf: not in enabled drivers build config 00:02:08.583 net/igc: not in enabled drivers build config 00:02:08.583 net/ionic: not in enabled drivers build config 00:02:08.583 net/ipn3ke: not in enabled drivers build config 00:02:08.583 net/ixgbe: not in enabled drivers build config 00:02:08.583 net/mana: not in enabled drivers build config 00:02:08.583 net/memif: not in enabled drivers build config 00:02:08.583 net/mlx4: not in enabled drivers build config 00:02:08.583 net/mlx5: not in enabled drivers build config 00:02:08.583 net/mvneta: not in enabled drivers build config 00:02:08.583 net/mvpp2: not in enabled drivers build config 00:02:08.583 net/netvsc: not in enabled drivers build config 00:02:08.583 net/nfb: not in enabled drivers build config 00:02:08.583 net/nfp: not in enabled drivers build config 00:02:08.583 net/ngbe: not in enabled drivers build config 00:02:08.583 net/null: not in enabled drivers build config 00:02:08.583 net/octeontx: not in enabled drivers 
build config 00:02:08.583 net/octeon_ep: not in enabled drivers build config 00:02:08.583 net/pcap: not in enabled drivers build config 00:02:08.583 net/pfe: not in enabled drivers build config 00:02:08.583 net/qede: not in enabled drivers build config 00:02:08.583 net/ring: not in enabled drivers build config 00:02:08.583 net/sfc: not in enabled drivers build config 00:02:08.583 net/softnic: not in enabled drivers build config 00:02:08.583 net/tap: not in enabled drivers build config 00:02:08.583 net/thunderx: not in enabled drivers build config 00:02:08.583 net/txgbe: not in enabled drivers build config 00:02:08.583 net/vdev_netvsc: not in enabled drivers build config 00:02:08.583 net/vhost: not in enabled drivers build config 00:02:08.583 net/virtio: not in enabled drivers build config 00:02:08.583 net/vmxnet3: not in enabled drivers build config 00:02:08.583 raw/*: missing internal dependency, "rawdev" 00:02:08.583 crypto/armv8: not in enabled drivers build config 00:02:08.583 crypto/bcmfs: not in enabled drivers build config 00:02:08.583 crypto/caam_jr: not in enabled drivers build config 00:02:08.583 crypto/ccp: not in enabled drivers build config 00:02:08.583 crypto/cnxk: not in enabled drivers build config 00:02:08.583 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.583 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.583 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.583 crypto/mlx5: not in enabled drivers build config 00:02:08.583 crypto/mvsam: not in enabled drivers build config 00:02:08.583 crypto/nitrox: not in enabled drivers build config 00:02:08.583 crypto/null: not in enabled drivers build config 00:02:08.583 crypto/octeontx: not in enabled drivers build config 00:02:08.583 crypto/openssl: not in enabled drivers build config 00:02:08.583 crypto/scheduler: not in enabled drivers build config 00:02:08.583 crypto/uadk: not in enabled drivers build config 00:02:08.583 crypto/virtio: not in enabled drivers build config 00:02:08.583 compress/isal: not in enabled drivers build config 00:02:08.583 compress/mlx5: not in enabled drivers build config 00:02:08.583 compress/nitrox: not in enabled drivers build config 00:02:08.583 compress/octeontx: not in enabled drivers build config 00:02:08.583 compress/zlib: not in enabled drivers build config 00:02:08.583 regex/*: missing internal dependency, "regexdev" 00:02:08.583 ml/*: missing internal dependency, "mldev" 00:02:08.583 vdpa/ifc: not in enabled drivers build config 00:02:08.583 vdpa/mlx5: not in enabled drivers build config 00:02:08.583 vdpa/nfp: not in enabled drivers build config 00:02:08.583 vdpa/sfc: not in enabled drivers build config 00:02:08.583 event/*: missing internal dependency, "eventdev" 00:02:08.584 baseband/*: missing internal dependency, "bbdev" 00:02:08.584 gpu/*: missing internal dependency, "gpudev" 00:02:08.584 00:02:08.584 00:02:08.584 Build targets in project: 85 00:02:08.584 00:02:08.584 DPDK 24.03.0 00:02:08.584 00:02:08.584 User defined options 00:02:08.584 buildtype : debug 00:02:08.584 default_library : shared 00:02:08.584 libdir : lib 00:02:08.584 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.584 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.584 c_link_args : 00:02:08.584 cpu_instruction_set: native 00:02:08.584 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:08.584 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:08.584 enable_docs : false 00:02:08.584 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:08.584 enable_kmods : false 00:02:08.584 max_lcores : 128 00:02:08.584 tests : false 00:02:08.584 00:02:08.584 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.584 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:08.584 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:08.584 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.584 [3/268] Linking static target lib/librte_kvargs.a 00:02:08.584 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.584 [5/268] Linking static target lib/librte_log.a 00:02:08.584 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.842 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.099 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.100 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.100 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.100 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.100 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.100 [13/268] Linking static target lib/librte_telemetry.a 00:02:09.100 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.358 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.358 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.358 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.358 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.617 [19/268] Linking target lib/librte_log.so.24.1 00:02:09.617 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.876 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.876 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.203 [23/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.203 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.203 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.203 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.203 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.203 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.203 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.203 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.203 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.203 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.203 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.464 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.464 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.724 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.724 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.982 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.240 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.240 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.240 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.240 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.240 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.240 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.240 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.240 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.499 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.499 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.757 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.757 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.757 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.016 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.275 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.275 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.275 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.275 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.275 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.275 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.275 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.534 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.793 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.793 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.052 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.052 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.052 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.052 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.052 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.310 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.310 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.310 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.568 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.568 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.826 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.826 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.826 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.826 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.826 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.084 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.084 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.084 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.084 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.084 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.680 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.680 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.680 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.680 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.680 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.680 [88/268] Linking static target lib/librte_rcu.a 00:02:14.680 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.680 [90/268] Linking static target lib/librte_eal.a 00:02:14.680 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.939 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.939 [93/268] Linking static target lib/librte_ring.a 00:02:14.939 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.939 [95/268] Linking static target lib/librte_mempool.a 00:02:14.939 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.939 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.197 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.197 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.197 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.455 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.456 [102/268] Linking static target lib/librte_mbuf.a 00:02:15.456 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.456 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.456 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.714 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.714 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.714 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.714 [109/268] Linking static target lib/librte_net.a 00:02:15.973 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.973 [111/268] Linking static target lib/librte_meter.a 00:02:16.232 [112/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.232 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.232 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.232 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.232 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.492 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.492 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.492 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.059 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:17.059 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.059 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.318 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.577 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.577 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.577 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.577 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.577 [128/268] Linking static target lib/librte_pci.a 00:02:17.577 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.577 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.577 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.577 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.835 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.835 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.835 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.835 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.835 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.835 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.835 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.835 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.835 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.835 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.094 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.094 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.094 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.094 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.094 [147/268] Linking static target lib/librte_ethdev.a 00:02:18.352 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.352 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.611 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.611 [151/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.611 [152/268] Linking static target lib/librte_cmdline.a 00:02:18.611 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.869 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.869 [155/268] Linking static target lib/librte_timer.a 00:02:18.869 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.869 [157/268] Linking static target lib/librte_hash.a 00:02:18.869 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.127 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.127 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.127 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.127 [162/268] Linking static target lib/librte_compressdev.a 00:02:19.694 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.694 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.694 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.694 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.694 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.694 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.694 [169/268] Linking static target lib/librte_dmadev.a 00:02:20.263 [170/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.263 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.263 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.263 [173/268] Linking static target lib/librte_cryptodev.a 00:02:20.263 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.263 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.263 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.263 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.522 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.780 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.780 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.780 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.780 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.038 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.039 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.297 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.297 [186/268] Linking static target lib/librte_power.a 00:02:21.297 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.297 [188/268] Linking static target lib/librte_reorder.a 00:02:21.555 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.555 [190/268] Linking static target lib/librte_security.a 00:02:21.555 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.555 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.812 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.812 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.070 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.329 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.329 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.587 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.587 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.587 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.846 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.846 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.105 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.105 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.363 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.363 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.363 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.363 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.363 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.622 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.622 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.622 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.622 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.881 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.881 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.881 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.881 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.881 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.881 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.881 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:23.881 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.881 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.140 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.140 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.140 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.140 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.140 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.399 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:24.658 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.917 [230/268] Linking static target lib/librte_vhost.a 00:02:25.852 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.852 [232/268] Linking target lib/librte_eal.so.24.1 00:02:26.111 [233/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.111 [234/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.111 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.111 [236/268] Linking target lib/librte_pci.so.24.1 00:02:26.111 [237/268] Linking target lib/librte_timer.so.24.1 00:02:26.111 [238/268] Linking target lib/librte_meter.so.24.1 00:02:26.111 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.111 [240/268] Linking target lib/librte_ring.so.24.1 00:02:26.111 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:26.378 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.378 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.378 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.378 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.378 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.378 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.378 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:26.378 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:26.643 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.643 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.643 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.643 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.643 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.902 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.902 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:26.902 [257/268] Linking target lib/librte_net.so.24.1 00:02:26.902 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.902 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.902 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.902 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.902 [262/268] Linking target lib/librte_security.so.24.1 00:02:26.902 [263/268] Linking target lib/librte_hash.so.24.1 00:02:27.161 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.161 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.161 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:27.161 [267/268] Linking target lib/librte_power.so.24.1 00:02:27.420 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.420 INFO: autodetecting backend as ninja 00:02:27.420 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.962 CC lib/ut_mock/mock.o 00:02:53.962 CC lib/log/log.o 00:02:53.962 CC lib/log/log_deprecated.o 00:02:53.962 CC 
lib/log/log_flags.o 00:02:53.962 CC lib/ut/ut.o 00:02:53.962 LIB libspdk_ut_mock.a 00:02:53.962 SO libspdk_ut_mock.so.6.0 00:02:53.962 LIB libspdk_log.a 00:02:53.962 LIB libspdk_ut.a 00:02:53.963 SO libspdk_ut.so.2.0 00:02:53.963 SO libspdk_log.so.7.1 00:02:53.963 SYMLINK libspdk_ut_mock.so 00:02:53.963 SYMLINK libspdk_ut.so 00:02:53.963 SYMLINK libspdk_log.so 00:02:53.963 CC lib/util/base64.o 00:02:53.963 CC lib/util/bit_array.o 00:02:53.963 CC lib/util/cpuset.o 00:02:53.963 CC lib/util/crc16.o 00:02:53.963 CC lib/util/crc32.o 00:02:53.963 CC lib/util/crc32c.o 00:02:53.963 CC lib/dma/dma.o 00:02:53.963 CXX lib/trace_parser/trace.o 00:02:53.963 CC lib/ioat/ioat.o 00:02:53.963 CC lib/util/crc32_ieee.o 00:02:53.963 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.963 CC lib/util/crc64.o 00:02:53.963 CC lib/util/dif.o 00:02:53.963 CC lib/util/fd.o 00:02:53.963 CC lib/util/fd_group.o 00:02:53.963 LIB libspdk_dma.a 00:02:53.963 CC lib/util/file.o 00:02:53.963 SO libspdk_dma.so.5.0 00:02:53.963 CC lib/util/hexlify.o 00:02:53.963 CC lib/vfio_user/host/vfio_user.o 00:02:53.963 SYMLINK libspdk_dma.so 00:02:53.963 CC lib/util/iov.o 00:02:53.963 CC lib/util/math.o 00:02:53.963 LIB libspdk_ioat.a 00:02:53.963 CC lib/util/net.o 00:02:53.963 CC lib/util/pipe.o 00:02:53.963 SO libspdk_ioat.so.7.0 00:02:53.963 CC lib/util/strerror_tls.o 00:02:53.963 CC lib/util/string.o 00:02:53.963 CC lib/util/uuid.o 00:02:53.963 LIB libspdk_vfio_user.a 00:02:53.963 SYMLINK libspdk_ioat.so 00:02:53.963 CC lib/util/xor.o 00:02:53.963 SO libspdk_vfio_user.so.5.0 00:02:53.963 CC lib/util/zipf.o 00:02:53.963 CC lib/util/md5.o 00:02:53.963 SYMLINK libspdk_vfio_user.so 00:02:53.963 LIB libspdk_util.a 00:02:53.963 SO libspdk_util.so.10.1 00:02:53.963 LIB libspdk_trace_parser.a 00:02:53.963 SO libspdk_trace_parser.so.6.0 00:02:53.963 SYMLINK libspdk_util.so 00:02:53.963 SYMLINK libspdk_trace_parser.so 00:02:53.963 CC lib/rdma_utils/rdma_utils.o 00:02:53.963 CC lib/conf/conf.o 00:02:53.963 CC lib/vmd/vmd.o 00:02:53.963 CC lib/vmd/led.o 00:02:53.963 CC lib/json/json_parse.o 00:02:53.963 CC lib/json/json_util.o 00:02:53.963 CC lib/json/json_write.o 00:02:53.963 CC lib/env_dpdk/env.o 00:02:53.963 CC lib/env_dpdk/memory.o 00:02:53.963 CC lib/idxd/idxd.o 00:02:54.221 CC lib/idxd/idxd_user.o 00:02:54.221 CC lib/idxd/idxd_kernel.o 00:02:54.221 CC lib/env_dpdk/pci.o 00:02:54.221 LIB libspdk_rdma_utils.a 00:02:54.221 SO libspdk_rdma_utils.so.1.0 00:02:54.221 LIB libspdk_conf.a 00:02:54.221 LIB libspdk_json.a 00:02:54.221 SO libspdk_conf.so.6.0 00:02:54.480 SO libspdk_json.so.6.0 00:02:54.480 SYMLINK libspdk_rdma_utils.so 00:02:54.480 SYMLINK libspdk_conf.so 00:02:54.480 CC lib/env_dpdk/init.o 00:02:54.480 SYMLINK libspdk_json.so 00:02:54.480 CC lib/env_dpdk/threads.o 00:02:54.480 CC lib/env_dpdk/pci_ioat.o 00:02:54.480 CC lib/rdma_provider/common.o 00:02:54.739 CC lib/env_dpdk/pci_virtio.o 00:02:54.739 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.739 CC lib/env_dpdk/pci_vmd.o 00:02:54.739 CC lib/env_dpdk/pci_idxd.o 00:02:54.739 LIB libspdk_idxd.a 00:02:54.739 LIB libspdk_vmd.a 00:02:54.739 SO libspdk_idxd.so.12.1 00:02:54.739 SO libspdk_vmd.so.6.0 00:02:54.739 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:54.739 CC lib/env_dpdk/pci_event.o 00:02:54.739 CC lib/env_dpdk/sigbus_handler.o 00:02:54.739 SYMLINK libspdk_idxd.so 00:02:54.739 CC lib/env_dpdk/pci_dpdk.o 00:02:54.739 SYMLINK libspdk_vmd.so 00:02:54.739 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.739 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.739 CC lib/env_dpdk/pci_dpdk_2211.o 
00:02:54.739 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.739 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.997 LIB libspdk_rdma_provider.a 00:02:54.997 SO libspdk_rdma_provider.so.7.0 00:02:54.997 SYMLINK libspdk_rdma_provider.so 00:02:55.255 LIB libspdk_jsonrpc.a 00:02:55.255 SO libspdk_jsonrpc.so.6.0 00:02:55.255 SYMLINK libspdk_jsonrpc.so 00:02:55.514 CC lib/rpc/rpc.o 00:02:55.772 LIB libspdk_env_dpdk.a 00:02:55.772 LIB libspdk_rpc.a 00:02:55.772 SO libspdk_env_dpdk.so.15.1 00:02:55.772 SO libspdk_rpc.so.6.0 00:02:56.077 SYMLINK libspdk_rpc.so 00:02:56.077 SYMLINK libspdk_env_dpdk.so 00:02:56.077 CC lib/keyring/keyring_rpc.o 00:02:56.077 CC lib/keyring/keyring.o 00:02:56.077 CC lib/notify/notify.o 00:02:56.077 CC lib/notify/notify_rpc.o 00:02:56.077 CC lib/trace/trace.o 00:02:56.077 CC lib/trace/trace_flags.o 00:02:56.077 CC lib/trace/trace_rpc.o 00:02:56.364 LIB libspdk_keyring.a 00:02:56.364 LIB libspdk_notify.a 00:02:56.364 SO libspdk_keyring.so.2.0 00:02:56.364 SO libspdk_notify.so.6.0 00:02:56.364 LIB libspdk_trace.a 00:02:56.622 SO libspdk_trace.so.11.0 00:02:56.622 SYMLINK libspdk_notify.so 00:02:56.622 SYMLINK libspdk_keyring.so 00:02:56.622 SYMLINK libspdk_trace.so 00:02:56.880 CC lib/sock/sock.o 00:02:56.880 CC lib/sock/sock_rpc.o 00:02:56.880 CC lib/thread/thread.o 00:02:56.880 CC lib/thread/iobuf.o 00:02:57.137 LIB libspdk_sock.a 00:02:57.395 SO libspdk_sock.so.10.0 00:02:57.396 SYMLINK libspdk_sock.so 00:02:57.654 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.654 CC lib/nvme/nvme_ctrlr.o 00:02:57.654 CC lib/nvme/nvme_fabric.o 00:02:57.654 CC lib/nvme/nvme.o 00:02:57.654 CC lib/nvme/nvme_pcie.o 00:02:57.654 CC lib/nvme/nvme_qpair.o 00:02:57.654 CC lib/nvme/nvme_ns.o 00:02:57.654 CC lib/nvme/nvme_ns_cmd.o 00:02:57.654 CC lib/nvme/nvme_pcie_common.o 00:02:58.590 LIB libspdk_thread.a 00:02:58.590 SO libspdk_thread.so.11.0 00:02:58.590 CC lib/nvme/nvme_quirks.o 00:02:58.590 CC lib/nvme/nvme_transport.o 00:02:58.590 SYMLINK libspdk_thread.so 00:02:58.590 CC lib/nvme/nvme_discovery.o 00:02:58.590 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:58.590 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:58.849 CC lib/accel/accel.o 00:02:58.849 CC lib/blob/blobstore.o 00:02:59.111 CC lib/init/json_config.o 00:02:59.111 CC lib/nvme/nvme_tcp.o 00:02:59.111 CC lib/nvme/nvme_opal.o 00:02:59.369 CC lib/nvme/nvme_io_msg.o 00:02:59.369 CC lib/init/subsystem.o 00:02:59.369 CC lib/virtio/virtio.o 00:02:59.369 CC lib/nvme/nvme_poll_group.o 00:02:59.369 CC lib/fsdev/fsdev.o 00:02:59.369 CC lib/init/subsystem_rpc.o 00:02:59.627 CC lib/init/rpc.o 00:02:59.627 CC lib/fsdev/fsdev_io.o 00:02:59.886 CC lib/virtio/virtio_vhost_user.o 00:02:59.886 LIB libspdk_init.a 00:02:59.886 SO libspdk_init.so.6.0 00:02:59.886 SYMLINK libspdk_init.so 00:02:59.886 CC lib/virtio/virtio_vfio_user.o 00:02:59.886 CC lib/virtio/virtio_pci.o 00:03:00.144 CC lib/accel/accel_rpc.o 00:03:00.144 CC lib/blob/request.o 00:03:00.144 CC lib/fsdev/fsdev_rpc.o 00:03:00.144 CC lib/event/app.o 00:03:00.144 CC lib/nvme/nvme_zns.o 00:03:00.144 CC lib/event/reactor.o 00:03:00.144 CC lib/accel/accel_sw.o 00:03:00.144 CC lib/nvme/nvme_stubs.o 00:03:00.407 LIB libspdk_virtio.a 00:03:00.407 SO libspdk_virtio.so.7.0 00:03:00.407 LIB libspdk_fsdev.a 00:03:00.407 SO libspdk_fsdev.so.2.0 00:03:00.407 SYMLINK libspdk_virtio.so 00:03:00.407 CC lib/event/log_rpc.o 00:03:00.407 CC lib/blob/zeroes.o 00:03:00.407 SYMLINK libspdk_fsdev.so 00:03:00.407 CC lib/blob/blob_bs_dev.o 00:03:00.665 CC lib/event/app_rpc.o 00:03:00.665 LIB libspdk_accel.a 00:03:00.665 CC 
lib/event/scheduler_static.o 00:03:00.665 CC lib/nvme/nvme_auth.o 00:03:00.665 SO libspdk_accel.so.16.0 00:03:00.665 CC lib/nvme/nvme_cuse.o 00:03:00.665 SYMLINK libspdk_accel.so 00:03:00.666 CC lib/nvme/nvme_rdma.o 00:03:00.927 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:00.927 CC lib/bdev/bdev.o 00:03:00.927 CC lib/bdev/bdev_rpc.o 00:03:00.927 CC lib/bdev/bdev_zone.o 00:03:00.927 LIB libspdk_event.a 00:03:00.927 CC lib/bdev/part.o 00:03:00.927 SO libspdk_event.so.14.0 00:03:01.185 SYMLINK libspdk_event.so 00:03:01.185 CC lib/bdev/scsi_nvme.o 00:03:01.754 LIB libspdk_fuse_dispatcher.a 00:03:01.754 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.754 SYMLINK libspdk_fuse_dispatcher.so 00:03:02.012 LIB libspdk_nvme.a 00:03:02.272 LIB libspdk_blob.a 00:03:02.272 SO libspdk_nvme.so.15.0 00:03:02.272 SO libspdk_blob.so.11.0 00:03:02.531 SYMLINK libspdk_blob.so 00:03:02.531 SYMLINK libspdk_nvme.so 00:03:02.790 CC lib/blobfs/blobfs.o 00:03:02.790 CC lib/blobfs/tree.o 00:03:02.790 CC lib/lvol/lvol.o 00:03:03.727 LIB libspdk_blobfs.a 00:03:03.727 LIB libspdk_lvol.a 00:03:03.727 SO libspdk_blobfs.so.10.0 00:03:03.727 SO libspdk_lvol.so.10.0 00:03:03.727 SYMLINK libspdk_blobfs.so 00:03:03.727 SYMLINK libspdk_lvol.so 00:03:03.989 LIB libspdk_bdev.a 00:03:03.989 SO libspdk_bdev.so.17.0 00:03:03.989 SYMLINK libspdk_bdev.so 00:03:04.253 CC lib/nvmf/ctrlr.o 00:03:04.253 CC lib/ftl/ftl_layout.o 00:03:04.253 CC lib/nvmf/ctrlr_discovery.o 00:03:04.253 CC lib/ftl/ftl_core.o 00:03:04.253 CC lib/ftl/ftl_init.o 00:03:04.253 CC lib/nvmf/subsystem.o 00:03:04.253 CC lib/nvmf/ctrlr_bdev.o 00:03:04.253 CC lib/ublk/ublk.o 00:03:04.253 CC lib/nbd/nbd.o 00:03:04.253 CC lib/scsi/dev.o 00:03:04.820 CC lib/ftl/ftl_debug.o 00:03:04.820 CC lib/scsi/lun.o 00:03:04.820 CC lib/ublk/ublk_rpc.o 00:03:04.820 CC lib/nbd/nbd_rpc.o 00:03:04.820 CC lib/ftl/ftl_io.o 00:03:04.821 CC lib/ftl/ftl_sb.o 00:03:05.079 CC lib/scsi/port.o 00:03:05.079 LIB libspdk_nbd.a 00:03:05.079 CC lib/nvmf/nvmf.o 00:03:05.079 LIB libspdk_ublk.a 00:03:05.079 SO libspdk_nbd.so.7.0 00:03:05.079 SO libspdk_ublk.so.3.0 00:03:05.079 SYMLINK libspdk_nbd.so 00:03:05.079 CC lib/scsi/scsi.o 00:03:05.079 CC lib/ftl/ftl_l2p.o 00:03:05.079 SYMLINK libspdk_ublk.so 00:03:05.079 CC lib/scsi/scsi_bdev.o 00:03:05.079 CC lib/nvmf/nvmf_rpc.o 00:03:05.337 CC lib/nvmf/transport.o 00:03:05.337 CC lib/ftl/ftl_l2p_flat.o 00:03:05.337 CC lib/nvmf/tcp.o 00:03:05.337 CC lib/scsi/scsi_pr.o 00:03:05.596 CC lib/scsi/scsi_rpc.o 00:03:05.596 CC lib/ftl/ftl_nv_cache.o 00:03:05.596 CC lib/scsi/task.o 00:03:05.596 CC lib/nvmf/stubs.o 00:03:05.596 CC lib/nvmf/mdns_server.o 00:03:05.854 CC lib/ftl/ftl_band.o 00:03:05.854 LIB libspdk_scsi.a 00:03:06.112 SO libspdk_scsi.so.9.0 00:03:06.112 CC lib/nvmf/rdma.o 00:03:06.112 SYMLINK libspdk_scsi.so 00:03:06.112 CC lib/ftl/ftl_band_ops.o 00:03:06.112 CC lib/nvmf/auth.o 00:03:06.112 CC lib/ftl/ftl_writer.o 00:03:06.112 CC lib/ftl/ftl_rq.o 00:03:06.370 CC lib/iscsi/conn.o 00:03:06.370 CC lib/vhost/vhost.o 00:03:06.370 CC lib/iscsi/init_grp.o 00:03:06.370 CC lib/ftl/ftl_reloc.o 00:03:06.370 CC lib/iscsi/iscsi.o 00:03:06.629 CC lib/iscsi/param.o 00:03:06.629 CC lib/vhost/vhost_rpc.o 00:03:06.888 CC lib/vhost/vhost_scsi.o 00:03:06.888 CC lib/vhost/vhost_blk.o 00:03:06.888 CC lib/iscsi/portal_grp.o 00:03:06.888 CC lib/iscsi/tgt_node.o 00:03:07.146 CC lib/ftl/ftl_l2p_cache.o 00:03:07.146 CC lib/ftl/ftl_p2l.o 00:03:07.146 CC lib/ftl/ftl_p2l_log.o 00:03:07.404 CC lib/vhost/rte_vhost_user.o 00:03:07.404 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.404 CC 
lib/iscsi/iscsi_subsystem.o 00:03:07.747 CC lib/iscsi/iscsi_rpc.o 00:03:07.747 CC lib/iscsi/task.o 00:03:07.747 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.747 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.747 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.017 LIB libspdk_iscsi.a 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.017 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.017 SO libspdk_iscsi.so.8.0 00:03:08.274 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.274 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.274 CC lib/ftl/utils/ftl_conf.o 00:03:08.274 SYMLINK libspdk_iscsi.so 00:03:08.274 CC lib/ftl/utils/ftl_md.o 00:03:08.274 CC lib/ftl/utils/ftl_mempool.o 00:03:08.533 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.533 CC lib/ftl/utils/ftl_property.o 00:03:08.533 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.533 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.533 LIB libspdk_nvmf.a 00:03:08.533 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.533 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.533 SO libspdk_nvmf.so.20.0 00:03:08.793 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.793 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.793 LIB libspdk_vhost.a 00:03:08.793 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.793 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.793 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.793 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.793 SO libspdk_vhost.so.8.0 00:03:08.793 SYMLINK libspdk_nvmf.so 00:03:08.793 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.793 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.793 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.793 CC lib/ftl/base/ftl_base_dev.o 00:03:08.793 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.051 SYMLINK libspdk_vhost.so 00:03:09.051 CC lib/ftl/ftl_trace.o 00:03:09.309 LIB libspdk_ftl.a 00:03:09.567 SO libspdk_ftl.so.9.0 00:03:09.826 SYMLINK libspdk_ftl.so 00:03:10.393 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.393 CC module/sock/posix/posix.o 00:03:10.393 CC module/blob/bdev/blob_bdev.o 00:03:10.393 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.393 CC module/sock/uring/uring.o 00:03:10.393 CC module/fsdev/aio/fsdev_aio.o 00:03:10.393 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.393 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.393 CC module/accel/error/accel_error.o 00:03:10.393 CC module/keyring/file/keyring.o 00:03:10.651 LIB libspdk_env_dpdk_rpc.a 00:03:10.651 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.651 LIB libspdk_scheduler_dynamic.a 00:03:10.651 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.651 CC module/keyring/file/keyring_rpc.o 00:03:10.651 LIB libspdk_scheduler_gscheduler.a 00:03:10.651 LIB libspdk_blob_bdev.a 00:03:10.651 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.651 SO libspdk_scheduler_gscheduler.so.4.0 00:03:10.651 SO libspdk_blob_bdev.so.11.0 00:03:10.651 CC module/accel/error/accel_error_rpc.o 00:03:10.651 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.651 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.651 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.651 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.651 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.651 SYMLINK libspdk_blob_bdev.so 00:03:10.651 CC module/fsdev/aio/linux_aio_mgr.o 00:03:10.908 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.908 LIB libspdk_keyring_file.a 00:03:10.908 SO libspdk_keyring_file.so.2.0 
00:03:10.908 CC module/keyring/linux/keyring.o 00:03:10.908 LIB libspdk_accel_error.a 00:03:10.908 CC module/keyring/linux/keyring_rpc.o 00:03:10.908 SYMLINK libspdk_keyring_file.so 00:03:10.908 SO libspdk_accel_error.so.2.0 00:03:10.908 CC module/accel/ioat/accel_ioat.o 00:03:11.166 SYMLINK libspdk_accel_error.so 00:03:11.166 LIB libspdk_sock_uring.a 00:03:11.166 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.166 SO libspdk_sock_uring.so.5.0 00:03:11.166 CC module/bdev/delay/vbdev_delay.o 00:03:11.166 LIB libspdk_keyring_linux.a 00:03:11.166 SO libspdk_keyring_linux.so.1.0 00:03:11.166 SYMLINK libspdk_sock_uring.so 00:03:11.166 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.166 CC module/accel/dsa/accel_dsa.o 00:03:11.166 LIB libspdk_accel_ioat.a 00:03:11.425 CC module/accel/iaa/accel_iaa.o 00:03:11.425 SYMLINK libspdk_keyring_linux.so 00:03:11.425 SO libspdk_accel_ioat.so.6.0 00:03:11.425 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.425 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.425 SYMLINK libspdk_accel_ioat.so 00:03:11.425 CC module/bdev/error/vbdev_error.o 00:03:11.425 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.425 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.683 LIB libspdk_fsdev_aio.a 00:03:11.683 LIB libspdk_bdev_delay.a 00:03:11.683 SO libspdk_fsdev_aio.so.1.0 00:03:11.683 SO libspdk_bdev_delay.so.6.0 00:03:11.683 LIB libspdk_blobfs_bdev.a 00:03:11.683 LIB libspdk_sock_posix.a 00:03:11.683 SO libspdk_blobfs_bdev.so.6.0 00:03:11.683 LIB libspdk_accel_iaa.a 00:03:11.683 SO libspdk_sock_posix.so.6.0 00:03:11.683 SYMLINK libspdk_fsdev_aio.so 00:03:11.683 SYMLINK libspdk_bdev_delay.so 00:03:11.683 CC module/bdev/gpt/gpt.o 00:03:11.683 SYMLINK libspdk_blobfs_bdev.so 00:03:11.683 SO libspdk_accel_iaa.so.3.0 00:03:11.683 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.683 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.989 LIB libspdk_accel_dsa.a 00:03:11.989 SYMLINK libspdk_sock_posix.so 00:03:11.989 SYMLINK libspdk_accel_iaa.so 00:03:11.989 SO libspdk_accel_dsa.so.5.0 00:03:11.989 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.989 SYMLINK libspdk_accel_dsa.so 00:03:11.989 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.989 CC module/bdev/nvme/bdev_nvme.o 00:03:11.989 CC module/bdev/malloc/bdev_malloc.o 00:03:11.989 CC module/bdev/null/bdev_null.o 00:03:11.989 CC module/bdev/raid/bdev_raid.o 00:03:11.989 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.247 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.247 LIB libspdk_bdev_error.a 00:03:12.247 SO libspdk_bdev_error.so.6.0 00:03:12.247 LIB libspdk_bdev_gpt.a 00:03:12.247 SO libspdk_bdev_gpt.so.6.0 00:03:12.247 SYMLINK libspdk_bdev_error.so 00:03:12.506 CC module/bdev/null/bdev_null_rpc.o 00:03:12.506 SYMLINK libspdk_bdev_gpt.so 00:03:12.506 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.506 LIB libspdk_bdev_passthru.a 00:03:12.506 SO libspdk_bdev_passthru.so.6.0 00:03:12.506 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.506 SYMLINK libspdk_bdev_passthru.so 00:03:12.506 CC module/bdev/split/vbdev_split.o 00:03:12.506 LIB libspdk_bdev_null.a 00:03:12.764 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.764 SO libspdk_bdev_null.so.6.0 00:03:12.764 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.764 LIB libspdk_bdev_lvol.a 00:03:12.764 SO libspdk_bdev_lvol.so.6.0 00:03:12.764 SYMLINK libspdk_bdev_null.so 00:03:12.764 CC module/bdev/nvme/nvme_rpc.o 00:03:12.764 CC module/bdev/uring/bdev_uring.o 00:03:12.764 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.764 LIB libspdk_bdev_malloc.a 00:03:12.764 SYMLINK libspdk_bdev_lvol.so 00:03:12.764 
CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.024 SO libspdk_bdev_malloc.so.6.0 00:03:13.024 SYMLINK libspdk_bdev_malloc.so 00:03:13.024 CC module/bdev/nvme/vbdev_opal.o 00:03:13.024 CC module/bdev/uring/bdev_uring_rpc.o 00:03:13.024 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.024 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.024 CC module/bdev/raid/raid0.o 00:03:13.283 LIB libspdk_bdev_split.a 00:03:13.283 LIB libspdk_bdev_zone_block.a 00:03:13.283 SO libspdk_bdev_split.so.6.0 00:03:13.283 SO libspdk_bdev_zone_block.so.6.0 00:03:13.283 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.283 LIB libspdk_bdev_uring.a 00:03:13.283 SYMLINK libspdk_bdev_split.so 00:03:13.283 SYMLINK libspdk_bdev_zone_block.so 00:03:13.283 SO libspdk_bdev_uring.so.6.0 00:03:13.541 CC module/bdev/raid/raid1.o 00:03:13.541 SYMLINK libspdk_bdev_uring.so 00:03:13.541 CC module/bdev/raid/concat.o 00:03:13.541 CC module/bdev/ftl/bdev_ftl.o 00:03:13.541 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.541 CC module/bdev/aio/bdev_aio.o 00:03:13.541 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.541 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.541 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.800 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.800 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.800 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.800 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.800 LIB libspdk_bdev_raid.a 00:03:13.800 SO libspdk_bdev_raid.so.6.0 00:03:14.058 LIB libspdk_bdev_ftl.a 00:03:14.058 SYMLINK libspdk_bdev_raid.so 00:03:14.058 LIB libspdk_bdev_aio.a 00:03:14.058 SO libspdk_bdev_ftl.so.6.0 00:03:14.058 SO libspdk_bdev_aio.so.6.0 00:03:14.058 LIB libspdk_bdev_iscsi.a 00:03:14.058 SYMLINK libspdk_bdev_ftl.so 00:03:14.058 SO libspdk_bdev_iscsi.so.6.0 00:03:14.058 SYMLINK libspdk_bdev_aio.so 00:03:14.317 SYMLINK libspdk_bdev_iscsi.so 00:03:14.317 LIB libspdk_bdev_virtio.a 00:03:14.317 SO libspdk_bdev_virtio.so.6.0 00:03:14.317 SYMLINK libspdk_bdev_virtio.so 00:03:15.718 LIB libspdk_bdev_nvme.a 00:03:15.718 SO libspdk_bdev_nvme.so.7.1 00:03:15.718 SYMLINK libspdk_bdev_nvme.so 00:03:16.286 CC module/event/subsystems/vmd/vmd.o 00:03:16.286 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.286 CC module/event/subsystems/sock/sock.o 00:03:16.286 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.286 CC module/event/subsystems/keyring/keyring.o 00:03:16.286 CC module/event/subsystems/fsdev/fsdev.o 00:03:16.286 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.286 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.286 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.286 LIB libspdk_event_keyring.a 00:03:16.286 LIB libspdk_event_scheduler.a 00:03:16.286 LIB libspdk_event_iobuf.a 00:03:16.286 LIB libspdk_event_vmd.a 00:03:16.286 LIB libspdk_event_vhost_blk.a 00:03:16.286 SO libspdk_event_keyring.so.1.0 00:03:16.286 SO libspdk_event_scheduler.so.4.0 00:03:16.286 SO libspdk_event_vmd.so.6.0 00:03:16.286 SO libspdk_event_vhost_blk.so.3.0 00:03:16.287 SO libspdk_event_iobuf.so.3.0 00:03:16.546 LIB libspdk_event_fsdev.a 00:03:16.546 LIB libspdk_event_sock.a 00:03:16.546 SYMLINK libspdk_event_keyring.so 00:03:16.546 SYMLINK libspdk_event_scheduler.so 00:03:16.546 SYMLINK libspdk_event_vhost_blk.so 00:03:16.546 SO libspdk_event_fsdev.so.1.0 00:03:16.546 SYMLINK libspdk_event_vmd.so 00:03:16.546 SO libspdk_event_sock.so.5.0 00:03:16.546 SYMLINK libspdk_event_iobuf.so 00:03:16.546 SYMLINK libspdk_event_sock.so 00:03:16.546 SYMLINK libspdk_event_fsdev.so 00:03:16.804 CC 
module/event/subsystems/accel/accel.o 00:03:16.804 LIB libspdk_event_accel.a 00:03:17.062 SO libspdk_event_accel.so.6.0 00:03:17.062 SYMLINK libspdk_event_accel.so 00:03:17.321 CC module/event/subsystems/bdev/bdev.o 00:03:17.580 LIB libspdk_event_bdev.a 00:03:17.580 SO libspdk_event_bdev.so.6.0 00:03:17.580 SYMLINK libspdk_event_bdev.so 00:03:17.839 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.839 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.839 CC module/event/subsystems/ublk/ublk.o 00:03:17.839 CC module/event/subsystems/nbd/nbd.o 00:03:17.839 CC module/event/subsystems/scsi/scsi.o 00:03:18.098 LIB libspdk_event_ublk.a 00:03:18.098 LIB libspdk_event_nbd.a 00:03:18.098 SO libspdk_event_ublk.so.3.0 00:03:18.098 SO libspdk_event_nbd.so.6.0 00:03:18.098 LIB libspdk_event_scsi.a 00:03:18.098 SO libspdk_event_scsi.so.6.0 00:03:18.098 SYMLINK libspdk_event_nbd.so 00:03:18.098 SYMLINK libspdk_event_ublk.so 00:03:18.098 SYMLINK libspdk_event_scsi.so 00:03:18.098 LIB libspdk_event_nvmf.a 00:03:18.098 SO libspdk_event_nvmf.so.6.0 00:03:18.359 SYMLINK libspdk_event_nvmf.so 00:03:18.359 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.359 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.618 LIB libspdk_event_vhost_scsi.a 00:03:18.618 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.618 LIB libspdk_event_iscsi.a 00:03:18.618 SO libspdk_event_iscsi.so.6.0 00:03:18.618 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.618 SYMLINK libspdk_event_iscsi.so 00:03:18.877 SO libspdk.so.6.0 00:03:18.877 SYMLINK libspdk.so 00:03:19.136 CC app/spdk_nvme_identify/identify.o 00:03:19.136 CC app/trace_record/trace_record.o 00:03:19.136 CC app/spdk_lspci/spdk_lspci.o 00:03:19.136 CXX app/trace/trace.o 00:03:19.136 CC app/spdk_nvme_perf/perf.o 00:03:19.136 CC app/spdk_tgt/spdk_tgt.o 00:03:19.136 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.136 CC app/nvmf_tgt/nvmf_main.o 00:03:19.395 CC test/thread/poller_perf/poller_perf.o 00:03:19.395 LINK spdk_lspci 00:03:19.395 CC examples/util/zipf/zipf.o 00:03:19.395 LINK nvmf_tgt 00:03:19.395 LINK iscsi_tgt 00:03:19.395 LINK poller_perf 00:03:19.395 LINK spdk_trace_record 00:03:19.395 LINK spdk_tgt 00:03:19.653 LINK zipf 00:03:19.653 LINK spdk_trace 00:03:19.653 CC examples/ioat/perf/perf.o 00:03:19.653 CC examples/ioat/verify/verify.o 00:03:19.653 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.912 CC app/spdk_top/spdk_top.o 00:03:19.912 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.912 LINK ioat_perf 00:03:19.912 CC test/dma/test_dma/test_dma.o 00:03:19.912 CC app/spdk_dd/spdk_dd.o 00:03:19.912 LINK verify 00:03:19.912 LINK spdk_nvme_discover 00:03:19.912 CC app/fio/nvme/fio_plugin.o 00:03:20.169 LINK interrupt_tgt 00:03:20.170 LINK spdk_nvme_perf 00:03:20.170 LINK spdk_nvme_identify 00:03:20.170 CC app/fio/bdev/fio_plugin.o 00:03:20.170 CC app/vhost/vhost.o 00:03:20.428 LINK spdk_dd 00:03:20.428 CC examples/thread/thread/thread_ex.o 00:03:20.428 LINK test_dma 00:03:20.428 CC examples/sock/hello_world/hello_sock.o 00:03:20.428 LINK vhost 00:03:20.428 CC examples/idxd/perf/perf.o 00:03:20.428 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.686 LINK spdk_nvme 00:03:20.686 CC examples/vmd/led/led.o 00:03:20.686 LINK lsvmd 00:03:20.686 LINK thread 00:03:20.686 LINK spdk_top 00:03:20.686 LINK hello_sock 00:03:20.686 LINK spdk_bdev 00:03:20.944 LINK led 00:03:20.944 TEST_HEADER include/spdk/accel.h 00:03:20.944 TEST_HEADER include/spdk/accel_module.h 00:03:20.944 TEST_HEADER include/spdk/assert.h 00:03:20.944 TEST_HEADER include/spdk/barrier.h 00:03:20.944 TEST_HEADER 
include/spdk/base64.h 00:03:20.944 TEST_HEADER include/spdk/bdev.h 00:03:20.944 TEST_HEADER include/spdk/bdev_module.h 00:03:20.944 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.944 TEST_HEADER include/spdk/bit_array.h 00:03:20.944 TEST_HEADER include/spdk/bit_pool.h 00:03:20.944 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.944 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.944 TEST_HEADER include/spdk/blobfs.h 00:03:20.944 TEST_HEADER include/spdk/blob.h 00:03:20.944 TEST_HEADER include/spdk/conf.h 00:03:20.944 TEST_HEADER include/spdk/config.h 00:03:20.944 TEST_HEADER include/spdk/cpuset.h 00:03:20.944 LINK idxd_perf 00:03:20.944 TEST_HEADER include/spdk/crc16.h 00:03:20.944 TEST_HEADER include/spdk/crc32.h 00:03:20.944 TEST_HEADER include/spdk/crc64.h 00:03:20.944 TEST_HEADER include/spdk/dif.h 00:03:20.944 TEST_HEADER include/spdk/dma.h 00:03:20.944 TEST_HEADER include/spdk/endian.h 00:03:20.944 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.944 TEST_HEADER include/spdk/env.h 00:03:20.944 TEST_HEADER include/spdk/event.h 00:03:20.944 TEST_HEADER include/spdk/fd_group.h 00:03:20.944 TEST_HEADER include/spdk/fd.h 00:03:20.944 TEST_HEADER include/spdk/file.h 00:03:20.944 TEST_HEADER include/spdk/fsdev.h 00:03:20.944 TEST_HEADER include/spdk/fsdev_module.h 00:03:20.944 TEST_HEADER include/spdk/ftl.h 00:03:20.944 CC test/app/bdev_svc/bdev_svc.o 00:03:20.944 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:20.944 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.944 TEST_HEADER include/spdk/hexlify.h 00:03:20.944 TEST_HEADER include/spdk/histogram_data.h 00:03:20.944 TEST_HEADER include/spdk/idxd.h 00:03:20.944 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.944 TEST_HEADER include/spdk/init.h 00:03:20.944 TEST_HEADER include/spdk/ioat.h 00:03:20.944 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.944 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.944 TEST_HEADER include/spdk/json.h 00:03:20.944 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.944 TEST_HEADER include/spdk/keyring.h 00:03:20.944 TEST_HEADER include/spdk/keyring_module.h 00:03:20.944 TEST_HEADER include/spdk/likely.h 00:03:20.944 CC test/blobfs/mkfs/mkfs.o 00:03:20.944 TEST_HEADER include/spdk/log.h 00:03:20.944 TEST_HEADER include/spdk/lvol.h 00:03:20.944 TEST_HEADER include/spdk/md5.h 00:03:20.944 TEST_HEADER include/spdk/memory.h 00:03:20.944 TEST_HEADER include/spdk/mmio.h 00:03:20.944 TEST_HEADER include/spdk/nbd.h 00:03:20.944 TEST_HEADER include/spdk/net.h 00:03:20.944 TEST_HEADER include/spdk/notify.h 00:03:20.944 TEST_HEADER include/spdk/nvme.h 00:03:20.944 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.944 CC test/app/histogram_perf/histogram_perf.o 00:03:20.944 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.944 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.944 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.944 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.944 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.944 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.944 TEST_HEADER include/spdk/nvmf.h 00:03:20.944 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.944 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.944 TEST_HEADER include/spdk/opal.h 00:03:20.944 TEST_HEADER include/spdk/opal_spec.h 00:03:20.944 TEST_HEADER include/spdk/pci_ids.h 00:03:20.944 CC test/app/jsoncat/jsoncat.o 00:03:20.944 TEST_HEADER include/spdk/pipe.h 00:03:20.944 TEST_HEADER include/spdk/queue.h 00:03:20.944 TEST_HEADER include/spdk/reduce.h 00:03:20.944 TEST_HEADER include/spdk/rpc.h 00:03:20.944 TEST_HEADER include/spdk/scheduler.h 00:03:20.944 
TEST_HEADER include/spdk/scsi.h 00:03:21.203 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.203 TEST_HEADER include/spdk/sock.h 00:03:21.203 TEST_HEADER include/spdk/stdinc.h 00:03:21.203 TEST_HEADER include/spdk/string.h 00:03:21.203 TEST_HEADER include/spdk/thread.h 00:03:21.203 TEST_HEADER include/spdk/trace.h 00:03:21.203 CC test/app/stub/stub.o 00:03:21.203 TEST_HEADER include/spdk/trace_parser.h 00:03:21.203 TEST_HEADER include/spdk/tree.h 00:03:21.203 TEST_HEADER include/spdk/ublk.h 00:03:21.203 TEST_HEADER include/spdk/util.h 00:03:21.203 TEST_HEADER include/spdk/uuid.h 00:03:21.203 TEST_HEADER include/spdk/version.h 00:03:21.203 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.203 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:21.203 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.203 TEST_HEADER include/spdk/vhost.h 00:03:21.203 TEST_HEADER include/spdk/vmd.h 00:03:21.203 TEST_HEADER include/spdk/xor.h 00:03:21.203 TEST_HEADER include/spdk/zipf.h 00:03:21.203 CXX test/cpp_headers/accel.o 00:03:21.203 LINK bdev_svc 00:03:21.203 CC test/env/vtophys/vtophys.o 00:03:21.203 LINK histogram_perf 00:03:21.203 LINK mkfs 00:03:21.203 LINK jsoncat 00:03:21.203 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.203 CC examples/accel/perf/accel_perf.o 00:03:21.203 CXX test/cpp_headers/accel_module.o 00:03:21.462 LINK stub 00:03:21.462 LINK vtophys 00:03:21.462 CC test/env/memory/memory_ut.o 00:03:21.462 CXX test/cpp_headers/assert.o 00:03:21.462 CXX test/cpp_headers/barrier.o 00:03:21.462 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.462 CC test/env/pci/pci_ut.o 00:03:21.721 LINK nvme_fuzz 00:03:21.721 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.721 CC examples/blob/hello_world/hello_blob.o 00:03:21.721 CXX test/cpp_headers/base64.o 00:03:21.721 LINK env_dpdk_post_init 00:03:21.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.721 LINK accel_perf 00:03:21.980 CXX test/cpp_headers/bdev.o 00:03:21.980 LINK hello_blob 00:03:21.980 LINK mem_callbacks 00:03:21.980 LINK pci_ut 00:03:21.980 CXX test/cpp_headers/bdev_module.o 00:03:22.239 CC examples/nvme/hello_world/hello_world.o 00:03:22.239 CXX test/cpp_headers/bdev_zone.o 00:03:22.239 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:22.239 CC test/event/event_perf/event_perf.o 00:03:22.239 LINK vhost_fuzz 00:03:22.239 CC examples/blob/cli/blobcli.o 00:03:22.239 CXX test/cpp_headers/bit_array.o 00:03:22.499 LINK hello_world 00:03:22.499 LINK event_perf 00:03:22.499 CXX test/cpp_headers/bit_pool.o 00:03:22.499 LINK hello_fsdev 00:03:22.499 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.499 CC test/lvol/esnap/esnap.o 00:03:22.759 CXX test/cpp_headers/blob_bdev.o 00:03:22.759 CC examples/nvme/reconnect/reconnect.o 00:03:22.759 CC test/event/reactor/reactor.o 00:03:22.759 CC test/nvme/aer/aer.o 00:03:22.759 CC test/rpc_client/rpc_client_test.o 00:03:22.759 LINK blobcli 00:03:22.759 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.759 LINK reactor 00:03:22.759 LINK hello_bdev 00:03:22.759 LINK memory_ut 00:03:23.017 LINK rpc_client_test 00:03:23.017 LINK reconnect 00:03:23.017 LINK aer 00:03:23.017 CC test/event/reactor_perf/reactor_perf.o 00:03:23.017 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.017 CXX test/cpp_headers/blobfs.o 00:03:23.276 CXX test/cpp_headers/blob.o 00:03:23.276 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.276 CC test/nvme/reset/reset.o 00:03:23.276 CXX test/cpp_headers/conf.o 00:03:23.276 LINK reactor_perf 00:03:23.276 CC 
test/accel/dif/dif.o 00:03:23.276 CXX test/cpp_headers/config.o 00:03:23.276 CXX test/cpp_headers/cpuset.o 00:03:23.534 LINK iscsi_fuzz 00:03:23.534 CXX test/cpp_headers/crc16.o 00:03:23.534 CC test/event/app_repeat/app_repeat.o 00:03:23.534 CC test/nvme/sgl/sgl.o 00:03:23.534 LINK reset 00:03:23.534 LINK nvme_manage 00:03:23.792 CXX test/cpp_headers/crc32.o 00:03:23.792 LINK app_repeat 00:03:23.792 CC examples/nvme/arbitration/arbitration.o 00:03:23.792 CXX test/cpp_headers/crc64.o 00:03:23.792 CC test/nvme/e2edp/nvme_dp.o 00:03:23.792 CXX test/cpp_headers/dif.o 00:03:23.792 LINK sgl 00:03:23.792 CXX test/cpp_headers/dma.o 00:03:24.050 CC test/nvme/overhead/overhead.o 00:03:24.050 CC test/event/scheduler/scheduler.o 00:03:24.050 CXX test/cpp_headers/endian.o 00:03:24.050 LINK dif 00:03:24.050 LINK arbitration 00:03:24.050 LINK nvme_dp 00:03:24.050 CXX test/cpp_headers/env_dpdk.o 00:03:24.050 LINK bdevperf 00:03:24.050 CC test/nvme/err_injection/err_injection.o 00:03:24.308 LINK scheduler 00:03:24.308 CXX test/cpp_headers/env.o 00:03:24.308 CC examples/nvme/hotplug/hotplug.o 00:03:24.308 LINK overhead 00:03:24.308 CXX test/cpp_headers/event.o 00:03:24.308 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.308 CC test/nvme/startup/startup.o 00:03:24.308 LINK err_injection 00:03:24.308 CC test/nvme/reserve/reserve.o 00:03:24.568 CXX test/cpp_headers/fd_group.o 00:03:24.568 CXX test/cpp_headers/fd.o 00:03:24.568 CXX test/cpp_headers/file.o 00:03:24.568 LINK startup 00:03:24.568 LINK cmb_copy 00:03:24.568 LINK hotplug 00:03:24.568 CC test/nvme/simple_copy/simple_copy.o 00:03:24.568 CC test/nvme/connect_stress/connect_stress.o 00:03:24.568 LINK reserve 00:03:24.827 CXX test/cpp_headers/fsdev.o 00:03:24.827 LINK connect_stress 00:03:24.827 CC test/nvme/boot_partition/boot_partition.o 00:03:24.827 CC test/nvme/compliance/nvme_compliance.o 00:03:24.827 LINK simple_copy 00:03:24.827 CC examples/nvme/abort/abort.o 00:03:24.827 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.827 CXX test/cpp_headers/fsdev_module.o 00:03:24.827 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.827 CC test/bdev/bdevio/bdevio.o 00:03:25.086 LINK boot_partition 00:03:25.086 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.086 LINK pmr_persistence 00:03:25.086 CC test/nvme/fdp/fdp.o 00:03:25.086 CXX test/cpp_headers/ftl.o 00:03:25.086 CXX test/cpp_headers/fuse_dispatcher.o 00:03:25.086 LINK fused_ordering 00:03:25.086 LINK nvme_compliance 00:03:25.086 LINK abort 00:03:25.345 CXX test/cpp_headers/gpt_spec.o 00:03:25.345 LINK doorbell_aers 00:03:25.345 CXX test/cpp_headers/hexlify.o 00:03:25.345 CXX test/cpp_headers/histogram_data.o 00:03:25.345 LINK bdevio 00:03:25.345 CXX test/cpp_headers/idxd.o 00:03:25.345 CXX test/cpp_headers/idxd_spec.o 00:03:25.345 CXX test/cpp_headers/init.o 00:03:25.345 CC test/nvme/cuse/cuse.o 00:03:25.345 LINK fdp 00:03:25.605 CXX test/cpp_headers/ioat.o 00:03:25.605 CXX test/cpp_headers/ioat_spec.o 00:03:25.605 CXX test/cpp_headers/iscsi_spec.o 00:03:25.605 CXX test/cpp_headers/json.o 00:03:25.605 CXX test/cpp_headers/jsonrpc.o 00:03:25.605 CXX test/cpp_headers/keyring.o 00:03:25.605 CC examples/nvmf/nvmf/nvmf.o 00:03:25.605 CXX test/cpp_headers/keyring_module.o 00:03:25.605 CXX test/cpp_headers/likely.o 00:03:25.605 CXX test/cpp_headers/log.o 00:03:25.605 CXX test/cpp_headers/lvol.o 00:03:25.605 CXX test/cpp_headers/md5.o 00:03:25.864 CXX test/cpp_headers/memory.o 00:03:25.864 CXX test/cpp_headers/mmio.o 00:03:25.864 CXX test/cpp_headers/nbd.o 00:03:25.864 CXX 
test/cpp_headers/net.o 00:03:25.864 CXX test/cpp_headers/notify.o 00:03:25.864 CXX test/cpp_headers/nvme.o 00:03:25.864 CXX test/cpp_headers/nvme_intel.o 00:03:25.864 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.864 LINK nvmf 00:03:25.864 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.864 CXX test/cpp_headers/nvme_spec.o 00:03:25.864 CXX test/cpp_headers/nvme_zns.o 00:03:26.122 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.122 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.122 CXX test/cpp_headers/nvmf.o 00:03:26.122 CXX test/cpp_headers/nvmf_spec.o 00:03:26.122 CXX test/cpp_headers/nvmf_transport.o 00:03:26.122 CXX test/cpp_headers/opal.o 00:03:26.122 CXX test/cpp_headers/opal_spec.o 00:03:26.122 CXX test/cpp_headers/pci_ids.o 00:03:26.122 CXX test/cpp_headers/pipe.o 00:03:26.380 CXX test/cpp_headers/queue.o 00:03:26.380 CXX test/cpp_headers/reduce.o 00:03:26.380 CXX test/cpp_headers/rpc.o 00:03:26.380 CXX test/cpp_headers/scheduler.o 00:03:26.380 CXX test/cpp_headers/scsi.o 00:03:26.381 CXX test/cpp_headers/scsi_spec.o 00:03:26.381 CXX test/cpp_headers/sock.o 00:03:26.381 CXX test/cpp_headers/stdinc.o 00:03:26.381 CXX test/cpp_headers/string.o 00:03:26.381 CXX test/cpp_headers/thread.o 00:03:26.381 CXX test/cpp_headers/trace.o 00:03:26.381 CXX test/cpp_headers/trace_parser.o 00:03:26.639 CXX test/cpp_headers/tree.o 00:03:26.639 CXX test/cpp_headers/ublk.o 00:03:26.639 CXX test/cpp_headers/util.o 00:03:26.639 CXX test/cpp_headers/uuid.o 00:03:26.639 CXX test/cpp_headers/version.o 00:03:26.639 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.639 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.639 CXX test/cpp_headers/vhost.o 00:03:26.639 CXX test/cpp_headers/vmd.o 00:03:26.639 CXX test/cpp_headers/xor.o 00:03:26.639 CXX test/cpp_headers/zipf.o 00:03:26.897 LINK cuse 00:03:28.273 LINK esnap 00:03:28.273 00:03:28.273 real 1m33.202s 00:03:28.273 user 8m38.046s 00:03:28.273 sys 1m43.244s 00:03:28.273 16:48:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:28.273 ************************************ 00:03:28.273 END TEST make 00:03:28.273 16:48:46 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.273 ************************************ 00:03:28.531 16:48:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.531 16:48:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.531 16:48:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.531 16:48:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.531 16:48:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.531 16:48:46 -- pm/common@44 -- $ pid=5248 00:03:28.531 16:48:46 -- pm/common@50 -- $ kill -TERM 5248 00:03:28.531 16:48:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.531 16:48:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.531 16:48:46 -- pm/common@44 -- $ pid=5250 00:03:28.531 16:48:46 -- pm/common@50 -- $ kill -TERM 5250 00:03:28.531 16:48:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:28.531 16:48:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:28.531 16:48:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:28.531 16:48:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:28.531 16:48:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:28.531 16:48:47 -- 
common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:28.531 16:48:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.531 16:48:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.531 16:48:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.531 16:48:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.531 16:48:47 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.531 16:48:47 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.531 16:48:47 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.531 16:48:47 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.531 16:48:47 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.531 16:48:47 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.532 16:48:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.532 16:48:47 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.532 16:48:47 -- scripts/common.sh@345 -- # : 1 00:03:28.532 16:48:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.532 16:48:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:28.532 16:48:47 -- scripts/common.sh@365 -- # decimal 1 00:03:28.532 16:48:47 -- scripts/common.sh@353 -- # local d=1 00:03:28.532 16:48:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.532 16:48:47 -- scripts/common.sh@355 -- # echo 1 00:03:28.532 16:48:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.532 16:48:47 -- scripts/common.sh@366 -- # decimal 2 00:03:28.532 16:48:47 -- scripts/common.sh@353 -- # local d=2 00:03:28.532 16:48:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.532 16:48:47 -- scripts/common.sh@355 -- # echo 2 00:03:28.532 16:48:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.532 16:48:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.532 16:48:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.532 16:48:47 -- scripts/common.sh@368 -- # return 0 00:03:28.532 16:48:47 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.532 16:48:47 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.532 --rc genhtml_branch_coverage=1 00:03:28.532 --rc genhtml_function_coverage=1 00:03:28.532 --rc genhtml_legend=1 00:03:28.532 --rc geninfo_all_blocks=1 00:03:28.532 --rc geninfo_unexecuted_blocks=1 00:03:28.532 00:03:28.532 ' 00:03:28.532 16:48:47 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.532 --rc genhtml_branch_coverage=1 00:03:28.532 --rc genhtml_function_coverage=1 00:03:28.532 --rc genhtml_legend=1 00:03:28.532 --rc geninfo_all_blocks=1 00:03:28.532 --rc geninfo_unexecuted_blocks=1 00:03:28.532 00:03:28.532 ' 00:03:28.532 16:48:47 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.532 --rc genhtml_branch_coverage=1 00:03:28.532 --rc genhtml_function_coverage=1 00:03:28.532 --rc genhtml_legend=1 00:03:28.532 --rc geninfo_all_blocks=1 00:03:28.532 --rc geninfo_unexecuted_blocks=1 00:03:28.532 00:03:28.532 ' 00:03:28.532 16:48:47 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.532 --rc genhtml_branch_coverage=1 00:03:28.532 --rc genhtml_function_coverage=1 00:03:28.532 --rc genhtml_legend=1 00:03:28.532 --rc geninfo_all_blocks=1 00:03:28.532 --rc geninfo_unexecuted_blocks=1 
00:03:28.532 00:03:28.532 ' 00:03:28.532 16:48:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.532 16:48:47 -- nvmf/common.sh@7 -- # uname -s 00:03:28.532 16:48:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.532 16:48:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.532 16:48:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.532 16:48:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.532 16:48:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.532 16:48:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.532 16:48:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.532 16:48:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.532 16:48:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.532 16:48:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.794 16:48:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:03:28.794 16:48:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:03:28.794 16:48:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.794 16:48:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.794 16:48:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:28.794 16:48:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.794 16:48:47 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.794 16:48:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.794 16:48:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.794 16:48:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.794 16:48:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.794 16:48:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.794 16:48:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.794 16:48:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.794 16:48:47 -- paths/export.sh@5 -- # export PATH 00:03:28.794 16:48:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.794 16:48:47 -- nvmf/common.sh@51 -- # : 0 00:03:28.794 16:48:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.794 16:48:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.794 16:48:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.794 16:48:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.794 16:48:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.794 16:48:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.794 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.794 16:48:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.794 16:48:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.794 16:48:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.794 16:48:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.794 16:48:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.794 16:48:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.794 16:48:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.794 16:48:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.794 16:48:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.794 16:48:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.794 16:48:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.794 16:48:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.794 16:48:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.794 16:48:47 -- spdk/autotest.sh@48 -- # udevadm_pid=54370 00:03:28.794 16:48:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.794 16:48:47 -- pm/common@17 -- # local monitor 00:03:28.794 16:48:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.794 16:48:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.794 16:48:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.794 16:48:47 -- pm/common@25 -- # sleep 1 00:03:28.794 16:48:47 -- pm/common@21 -- # date +%s 00:03:28.794 16:48:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732121327 00:03:28.794 16:48:47 -- pm/common@21 -- # date +%s 00:03:28.794 16:48:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732121327 00:03:28.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732121327_collect-vmstat.pm.log 00:03:28.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732121327_collect-cpu-load.pm.log 00:03:29.734 16:48:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.734 16:48:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.734 16:48:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.734 16:48:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.734 16:48:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.734 16:48:48 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:29.734 16:48:48 -- common/autotest_common.sh@10 -- # set +x 00:03:29.734 16:48:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:29.734 16:48:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:29.734 16:48:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:29.734 16:48:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:29.734 16:48:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:29.734 16:48:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.734 16:48:48 -- common/autotest_common.sh@1457 -- # uname 00:03:29.734 16:48:48 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:29.734 16:48:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.734 16:48:48 -- common/autotest_common.sh@1477 -- # uname 00:03:29.734 16:48:48 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:29.734 16:48:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:29.735 16:48:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:29.994 lcov: LCOV version 1.15 00:03:29.994 16:48:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:48.084 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.010 16:49:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.010 16:49:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.010 16:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:03.010 16:49:21 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.010 16:49:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.526 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.526 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.526 16:49:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.526 16:49:21 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:03.526 16:49:21 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:03.526 16:49:21 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:03.526 16:49:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:03.526 16:49:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:03.526 16:49:21 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:03.526 16:49:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:03.526 16:49:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:03.526 16:49:21 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:03.526 16:49:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:03.526 16:49:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:03.526 16:49:21 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:03.526 16:49:21 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:03.526 16:49:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:03.526 16:49:21 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:03.526 16:49:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:03.526 16:49:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.526 16:49:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.526 16:49:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.526 16:49:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.526 16:49:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.526 16:49:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.527 16:49:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.527 No valid GPT data, bailing 00:04:03.527 16:49:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.527 16:49:21 -- scripts/common.sh@394 -- # pt= 00:04:03.527 16:49:21 -- scripts/common.sh@395 -- # return 1 00:04:03.527 16:49:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.527 1+0 records in 00:04:03.527 1+0 records out 00:04:03.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435878 s, 241 MB/s 00:04:03.527 16:49:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.527 16:49:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.527 16:49:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:03.527 16:49:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:03.527 16:49:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.527 No valid GPT data, bailing 00:04:03.527 16:49:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.527 16:49:22 -- scripts/common.sh@394 -- # pt= 00:04:03.527 16:49:22 -- scripts/common.sh@395 -- # return 1 00:04:03.527 16:49:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.527 1+0 records in 00:04:03.527 1+0 records out 00:04:03.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489772 s, 214 MB/s 00:04:03.527 16:49:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.527 16:49:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.527 16:49:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:03.527 16:49:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:03.527 16:49:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:03.785 No valid GPT data, bailing 00:04:03.785 16:49:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:03.785 16:49:22 -- scripts/common.sh@394 -- # pt= 00:04:03.785 16:49:22 -- scripts/common.sh@395 -- # return 1 00:04:03.785 16:49:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:03.785 1+0 records in 00:04:03.785 1+0 records out 00:04:03.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523006 s, 200 MB/s 00:04:03.785 16:49:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.785 16:49:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.785 16:49:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:03.785 16:49:22 -- scripts/common.sh@381 -- # 
local block=/dev/nvme1n3 pt 00:04:03.785 16:49:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:03.785 No valid GPT data, bailing 00:04:03.785 16:49:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:03.785 16:49:22 -- scripts/common.sh@394 -- # pt= 00:04:03.785 16:49:22 -- scripts/common.sh@395 -- # return 1 00:04:03.785 16:49:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:03.785 1+0 records in 00:04:03.785 1+0 records out 00:04:03.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482296 s, 217 MB/s 00:04:03.785 16:49:22 -- spdk/autotest.sh@105 -- # sync 00:04:03.785 16:49:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.785 16:49:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.785 16:49:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.683 16:49:24 -- spdk/autotest.sh@111 -- # uname -s 00:04:05.683 16:49:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:05.683 16:49:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:05.683 16:49:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:06.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.619 Hugepages 00:04:06.619 node hugesize free / total 00:04:06.619 node0 1048576kB 0 / 0 00:04:06.619 node0 2048kB 0 / 0 00:04:06.619 00:04:06.619 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.619 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:06.619 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:06.619 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:06.619 16:49:25 -- spdk/autotest.sh@117 -- # uname -s 00:04:06.619 16:49:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:06.619 16:49:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:06.619 16:49:25 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.446 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.446 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.446 16:49:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:08.821 16:49:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:08.821 16:49:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:08.821 16:49:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:08.821 16:49:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:08.821 16:49:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.821 16:49:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.821 16:49:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.821 16:49:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.821 16:49:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.821 16:49:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:08.821 16:49:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:08.821 16:49:27 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.821 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.080 Waiting for block devices as requested 00:04:09.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.080 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.080 16:49:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.080 16:49:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:09.080 16:49:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.080 16:49:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:09.080 16:49:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.080 16:49:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:09.080 16:49:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.338 16:49:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.338 16:49:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.338 16:49:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1543 -- # continue 00:04:09.338 16:49:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.338 16:49:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.338 16:49:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:09.338 16:49:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.338 16:49:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.338 16:49:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.338 
16:49:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.338 16:49:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.338 16:49:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.338 16:49:27 -- common/autotest_common.sh@1543 -- # continue 00:04:09.338 16:49:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.338 16:49:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.338 16:49:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.338 16:49:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.338 16:49:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.338 16:49:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.338 16:49:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.163 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.163 16:49:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.163 16:49:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.163 16:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:10.163 16:49:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.164 16:49:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:10.164 16:49:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.164 16:49:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:10.164 16:49:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:10.164 16:49:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:10.164 16:49:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.164 16:49:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:10.164 16:49:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.164 16:49:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.164 16:49:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.164 16:49:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.164 16:49:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.164 16:49:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:10.164 16:49:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.164 16:49:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.164 16:49:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:10.164 16:49:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.164 16:49:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.164 16:49:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.164 16:49:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:10.164 16:49:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.164 16:49:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.164 16:49:28 -- common/autotest_common.sh@1572 -- # (( 0 
> 0 )) 00:04:10.164 16:49:28 -- common/autotest_common.sh@1572 -- # return 0 00:04:10.164 16:49:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:10.164 16:49:28 -- common/autotest_common.sh@1580 -- # return 0 00:04:10.164 16:49:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.164 16:49:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.164 16:49:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.164 16:49:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.164 16:49:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.164 16:49:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.164 16:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:10.164 16:49:28 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:10.164 16:49:28 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:10.164 16:49:28 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:10.164 16:49:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.164 16:49:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.164 16:49:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.164 16:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:10.164 ************************************ 00:04:10.164 START TEST env 00:04:10.164 ************************************ 00:04:10.164 16:49:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.423 * Looking for test storage... 00:04:10.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:10.423 16:49:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.423 16:49:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.423 16:49:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.423 16:49:28 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.423 16:49:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.423 16:49:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.423 16:49:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.423 16:49:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.423 16:49:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.423 16:49:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.423 16:49:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.423 16:49:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.423 16:49:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.423 16:49:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.423 16:49:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.423 16:49:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:10.423 16:49:28 env -- scripts/common.sh@345 -- # : 1 00:04:10.423 16:49:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.423 16:49:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.423 16:49:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:10.423 16:49:28 env -- scripts/common.sh@353 -- # local d=1 00:04:10.424 16:49:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.424 16:49:28 env -- scripts/common.sh@355 -- # echo 1 00:04:10.424 16:49:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.424 16:49:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:10.424 16:49:28 env -- scripts/common.sh@353 -- # local d=2 00:04:10.424 16:49:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.424 16:49:28 env -- scripts/common.sh@355 -- # echo 2 00:04:10.424 16:49:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.424 16:49:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.424 16:49:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.424 16:49:28 env -- scripts/common.sh@368 -- # return 0 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.424 --rc genhtml_branch_coverage=1 00:04:10.424 --rc genhtml_function_coverage=1 00:04:10.424 --rc genhtml_legend=1 00:04:10.424 --rc geninfo_all_blocks=1 00:04:10.424 --rc geninfo_unexecuted_blocks=1 00:04:10.424 00:04:10.424 ' 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.424 --rc genhtml_branch_coverage=1 00:04:10.424 --rc genhtml_function_coverage=1 00:04:10.424 --rc genhtml_legend=1 00:04:10.424 --rc geninfo_all_blocks=1 00:04:10.424 --rc geninfo_unexecuted_blocks=1 00:04:10.424 00:04:10.424 ' 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.424 --rc genhtml_branch_coverage=1 00:04:10.424 --rc genhtml_function_coverage=1 00:04:10.424 --rc genhtml_legend=1 00:04:10.424 --rc geninfo_all_blocks=1 00:04:10.424 --rc geninfo_unexecuted_blocks=1 00:04:10.424 00:04:10.424 ' 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.424 --rc genhtml_branch_coverage=1 00:04:10.424 --rc genhtml_function_coverage=1 00:04:10.424 --rc genhtml_legend=1 00:04:10.424 --rc geninfo_all_blocks=1 00:04:10.424 --rc geninfo_unexecuted_blocks=1 00:04:10.424 00:04:10.424 ' 00:04:10.424 16:49:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.424 16:49:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.424 16:49:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.424 ************************************ 00:04:10.424 START TEST env_memory 00:04:10.424 ************************************ 00:04:10.424 16:49:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.424 00:04:10.424 00:04:10.424 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.424 http://cunit.sourceforge.net/ 00:04:10.424 00:04:10.424 00:04:10.424 Suite: memory 00:04:10.424 Test: alloc and free memory map ...[2024-11-20 16:49:28.921775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.424 passed 00:04:10.424 Test: mem map translation ...[2024-11-20 16:49:28.953013] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.424 [2024-11-20 16:49:28.953084] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.424 [2024-11-20 16:49:28.953145] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.424 [2024-11-20 16:49:28.953157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.684 passed 00:04:10.684 Test: mem map registration ...[2024-11-20 16:49:29.017423] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:10.684 [2024-11-20 16:49:29.017492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:10.684 passed 00:04:10.684 Test: mem map adjacent registrations ...passed 00:04:10.684 00:04:10.684 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.684 suites 1 1 n/a 0 0 00:04:10.684 tests 4 4 4 0 0 00:04:10.684 asserts 152 152 152 0 n/a 00:04:10.684 00:04:10.684 Elapsed time = 0.218 seconds 00:04:10.684 00:04:10.684 real 0m0.237s 00:04:10.684 user 0m0.218s 00:04:10.684 sys 0m0.015s 00:04:10.684 16:49:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.684 16:49:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:10.684 ************************************ 00:04:10.684 END TEST env_memory 00:04:10.684 ************************************ 00:04:10.684 16:49:29 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.684 16:49:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.684 16:49:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.684 16:49:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.684 ************************************ 00:04:10.684 START TEST env_vtophys 00:04:10.684 ************************************ 00:04:10.684 16:49:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.684 EAL: lib.eal log level changed from notice to debug 00:04:10.684 EAL: Detected lcore 0 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 1 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 2 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 3 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 4 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 5 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 6 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 7 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 8 as core 0 on socket 0 00:04:10.684 EAL: Detected lcore 9 as core 0 on socket 0 00:04:10.684 EAL: Maximum logical cores by configuration: 128 00:04:10.684 EAL: Detected CPU lcores: 10 00:04:10.684 EAL: Detected NUMA nodes: 1 00:04:10.684 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:10.684 EAL: Detected shared linkage of DPDK 00:04:10.684 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:10.684 EAL: Selected IOVA mode 'PA' 00:04:10.684 EAL: Probing VFIO support... 00:04:10.684 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:10.684 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:10.684 EAL: Ask a virtual area of 0x2e000 bytes 00:04:10.684 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:10.684 EAL: Setting up physically contiguous memory... 00:04:10.684 EAL: Setting maximum number of open files to 524288 00:04:10.684 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:10.684 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:10.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.684 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:10.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.684 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:10.684 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:10.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.684 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:10.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.684 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:10.684 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:10.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.684 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:10.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.684 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:10.684 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:10.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.684 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:10.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.684 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:10.684 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:10.685 EAL: Hugepages will be freed exactly as allocated. 00:04:10.685 EAL: No shared files mode enabled, IPC is disabled 00:04:10.685 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: TSC frequency is ~2200000 KHz 00:04:10.943 EAL: Main lcore 0 is ready (tid=7fb6db272a00;cpuset=[0]) 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 0 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 2MB 00:04:10.943 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:10.943 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:10.943 EAL: Mem event callback 'spdk:(nil)' registered 00:04:10.943 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:10.943 00:04:10.943 00:04:10.943 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.943 http://cunit.sourceforge.net/ 00:04:10.943 00:04:10.943 00:04:10.943 Suite: components_suite 00:04:10.943 Test: vtophys_malloc_test ...passed 00:04:10.943 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 4MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 4MB 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 6MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 6MB 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 10MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 10MB 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 18MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 18MB 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 34MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 34MB 00:04:10.943 EAL: Trying to obtain current memory policy. 
00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 66MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was shrunk by 66MB 00:04:10.943 EAL: Trying to obtain current memory policy. 00:04:10.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.943 EAL: Restoring previous memory policy: 4 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.943 EAL: request: mp_malloc_sync 00:04:10.943 EAL: No shared files mode enabled, IPC is disabled 00:04:10.943 EAL: Heap on socket 0 was expanded by 130MB 00:04:10.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.201 EAL: request: mp_malloc_sync 00:04:11.201 EAL: No shared files mode enabled, IPC is disabled 00:04:11.201 EAL: Heap on socket 0 was shrunk by 130MB 00:04:11.201 EAL: Trying to obtain current memory policy. 00:04:11.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.201 EAL: Restoring previous memory policy: 4 00:04:11.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.201 EAL: request: mp_malloc_sync 00:04:11.202 EAL: No shared files mode enabled, IPC is disabled 00:04:11.202 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.202 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.202 EAL: request: mp_malloc_sync 00:04:11.202 EAL: No shared files mode enabled, IPC is disabled 00:04:11.202 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.202 EAL: Trying to obtain current memory policy. 00:04:11.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.460 EAL: Restoring previous memory policy: 4 00:04:11.460 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.460 EAL: request: mp_malloc_sync 00:04:11.460 EAL: No shared files mode enabled, IPC is disabled 00:04:11.460 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.460 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.460 EAL: request: mp_malloc_sync 00:04:11.460 EAL: No shared files mode enabled, IPC is disabled 00:04:11.460 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.460 EAL: Trying to obtain current memory policy. 
00:04:11.460 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.719 EAL: Restoring previous memory policy: 4 00:04:11.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.719 EAL: request: mp_malloc_sync 00:04:11.719 EAL: No shared files mode enabled, IPC is disabled 00:04:11.719 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.979 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.238 passed 00:04:12.238 00:04:12.238 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.238 suites 1 1 n/a 0 0 00:04:12.238 tests 2 2 2 0 0 00:04:12.238 asserts 5421 5421 5421 0 n/a 00:04:12.238 00:04:12.238 Elapsed time = 1.236 seconds 00:04:12.238 EAL: request: mp_malloc_sync 00:04:12.238 EAL: No shared files mode enabled, IPC is disabled 00:04:12.238 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.238 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.238 EAL: request: mp_malloc_sync 00:04:12.238 EAL: No shared files mode enabled, IPC is disabled 00:04:12.238 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.238 EAL: No shared files mode enabled, IPC is disabled 00:04:12.238 EAL: No shared files mode enabled, IPC is disabled 00:04:12.238 EAL: No shared files mode enabled, IPC is disabled 00:04:12.238 00:04:12.238 real 0m1.449s 00:04:12.238 user 0m0.793s 00:04:12.238 sys 0m0.523s 00:04:12.238 16:49:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.238 16:49:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.238 ************************************ 00:04:12.238 END TEST env_vtophys 00:04:12.238 ************************************ 00:04:12.238 16:49:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.238 16:49:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.238 16:49:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.238 16:49:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.238 ************************************ 00:04:12.238 START TEST env_pci 00:04:12.238 ************************************ 00:04:12.238 16:49:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.238 00:04:12.238 00:04:12.238 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.238 http://cunit.sourceforge.net/ 00:04:12.238 00:04:12.238 00:04:12.238 Suite: pci 00:04:12.238 Test: pci_hook ...[2024-11-20 16:49:30.672349] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56611 has claimed it 00:04:12.238 passed 00:04:12.238 00:04:12.238 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.238 suites 1 1 n/a 0 0 00:04:12.238 tests 1 1 1 0 0 00:04:12.238 asserts 25 25 25 0 n/a 00:04:12.238 00:04:12.238 Elapsed time = 0.002 seconds 00:04:12.238 EAL: Cannot find device (10000:00:01.0) 00:04:12.238 EAL: Failed to attach device on primary process 00:04:12.238 00:04:12.238 real 0m0.019s 00:04:12.238 user 0m0.006s 00:04:12.238 sys 0m0.013s 00:04:12.238 16:49:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.238 16:49:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.238 ************************************ 00:04:12.238 END TEST env_pci 00:04:12.238 ************************************ 00:04:12.238 16:49:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.238 16:49:30 env -- env/env.sh@15 -- # uname 00:04:12.239 16:49:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.239 16:49:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.239 16:49:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.239 16:49:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:12.239 16:49:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.239 16:49:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.239 ************************************ 00:04:12.239 START TEST env_dpdk_post_init 00:04:12.239 ************************************ 00:04:12.239 16:49:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.239 EAL: Detected CPU lcores: 10 00:04:12.239 EAL: Detected NUMA nodes: 1 00:04:12.239 EAL: Detected shared linkage of DPDK 00:04:12.239 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.498 EAL: Selected IOVA mode 'PA' 00:04:12.498 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.498 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:12.498 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:12.498 Starting DPDK initialization... 00:04:12.498 Starting SPDK post initialization... 00:04:12.498 SPDK NVMe probe 00:04:12.498 Attaching to 0000:00:10.0 00:04:12.498 Attaching to 0000:00:11.0 00:04:12.498 Attached to 0000:00:10.0 00:04:12.498 Attached to 0000:00:11.0 00:04:12.498 Cleaning up... 00:04:12.498 00:04:12.498 real 0m0.192s 00:04:12.498 user 0m0.058s 00:04:12.498 sys 0m0.034s 00:04:12.498 16:49:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.498 16:49:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.498 ************************************ 00:04:12.498 END TEST env_dpdk_post_init 00:04:12.498 ************************************ 00:04:12.498 16:49:30 env -- env/env.sh@26 -- # uname 00:04:12.498 16:49:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.498 16:49:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.498 16:49:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.498 16:49:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.498 16:49:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.498 ************************************ 00:04:12.498 START TEST env_mem_callbacks 00:04:12.498 ************************************ 00:04:12.498 16:49:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.498 EAL: Detected CPU lcores: 10 00:04:12.498 EAL: Detected NUMA nodes: 1 00:04:12.498 EAL: Detected shared linkage of DPDK 00:04:12.498 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.498 EAL: Selected IOVA mode 'PA' 00:04:12.757 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.757 00:04:12.757 00:04:12.757 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.757 http://cunit.sourceforge.net/ 00:04:12.757 00:04:12.757 00:04:12.757 Suite: memory 00:04:12.757 Test: test ... 
00:04:12.757 register 0x200000200000 2097152 00:04:12.757 malloc 3145728 00:04:12.757 register 0x200000400000 4194304 00:04:12.757 buf 0x200000500000 len 3145728 PASSED 00:04:12.757 malloc 64 00:04:12.757 buf 0x2000004fff40 len 64 PASSED 00:04:12.757 malloc 4194304 00:04:12.757 register 0x200000800000 6291456 00:04:12.757 buf 0x200000a00000 len 4194304 PASSED 00:04:12.757 free 0x200000500000 3145728 00:04:12.757 free 0x2000004fff40 64 00:04:12.757 unregister 0x200000400000 4194304 PASSED 00:04:12.757 free 0x200000a00000 4194304 00:04:12.757 unregister 0x200000800000 6291456 PASSED 00:04:12.757 malloc 8388608 00:04:12.757 register 0x200000400000 10485760 00:04:12.757 buf 0x200000600000 len 8388608 PASSED 00:04:12.757 free 0x200000600000 8388608 00:04:12.757 unregister 0x200000400000 10485760 PASSED 00:04:12.757 passed 00:04:12.757 00:04:12.757 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.757 suites 1 1 n/a 0 0 00:04:12.757 tests 1 1 1 0 0 00:04:12.757 asserts 15 15 15 0 n/a 00:04:12.757 00:04:12.757 Elapsed time = 0.007 seconds 00:04:12.757 ************************************ 00:04:12.757 END TEST env_mem_callbacks 00:04:12.757 ************************************ 00:04:12.757 00:04:12.757 real 0m0.136s 00:04:12.757 user 0m0.013s 00:04:12.757 sys 0m0.022s 00:04:12.757 16:49:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.757 16:49:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.757 ************************************ 00:04:12.757 END TEST env 00:04:12.757 ************************************ 00:04:12.757 00:04:12.757 real 0m2.485s 00:04:12.757 user 0m1.287s 00:04:12.757 sys 0m0.851s 00:04:12.757 16:49:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.757 16:49:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.757 16:49:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.757 16:49:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.757 16:49:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.757 16:49:31 -- common/autotest_common.sh@10 -- # set +x 00:04:12.757 ************************************ 00:04:12.757 START TEST rpc 00:04:12.757 ************************************ 00:04:12.757 16:49:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:12.757 * Looking for test storage... 
00:04:12.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.757 16:49:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.757 16:49:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.757 16:49:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.016 16:49:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.016 16:49:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.016 16:49:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.016 16:49:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.016 16:49:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.016 16:49:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.016 16:49:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.016 16:49:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.016 16:49:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.016 16:49:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.016 16:49:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.016 16:49:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.016 16:49:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.016 16:49:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.016 16:49:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.016 16:49:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.016 16:49:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.016 16:49:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.016 --rc genhtml_branch_coverage=1 00:04:13.016 --rc genhtml_function_coverage=1 00:04:13.016 --rc genhtml_legend=1 00:04:13.016 --rc geninfo_all_blocks=1 00:04:13.016 --rc geninfo_unexecuted_blocks=1 00:04:13.016 00:04:13.016 ' 00:04:13.016 16:49:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.016 --rc genhtml_branch_coverage=1 00:04:13.016 --rc genhtml_function_coverage=1 00:04:13.016 --rc genhtml_legend=1 00:04:13.016 --rc geninfo_all_blocks=1 00:04:13.017 --rc geninfo_unexecuted_blocks=1 00:04:13.017 00:04:13.017 ' 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.017 --rc genhtml_branch_coverage=1 00:04:13.017 --rc genhtml_function_coverage=1 00:04:13.017 --rc 
genhtml_legend=1 00:04:13.017 --rc geninfo_all_blocks=1 00:04:13.017 --rc geninfo_unexecuted_blocks=1 00:04:13.017 00:04:13.017 ' 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.017 --rc genhtml_branch_coverage=1 00:04:13.017 --rc genhtml_function_coverage=1 00:04:13.017 --rc genhtml_legend=1 00:04:13.017 --rc geninfo_all_blocks=1 00:04:13.017 --rc geninfo_unexecuted_blocks=1 00:04:13.017 00:04:13.017 ' 00:04:13.017 16:49:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56729 00:04:13.017 16:49:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.017 16:49:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.017 16:49:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56729 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 56729 ']' 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.017 16:49:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.017 [2024-11-20 16:49:31.464549] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:13.017 [2024-11-20 16:49:31.465024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56729 ] 00:04:13.276 [2024-11-20 16:49:31.616666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.276 [2024-11-20 16:49:31.691079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.276 [2024-11-20 16:49:31.691157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56729' to capture a snapshot of events at runtime. 00:04:13.276 [2024-11-20 16:49:31.691174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.276 [2024-11-20 16:49:31.691185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.276 [2024-11-20 16:49:31.691195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56729 for offline analysis/debug. 
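The app_setup_trace notices above spell out how to snapshot the bdev tracepoint group enabled with '-e bdev' on this target. A minimal sketch of that capture, assuming the build/bin layout of this spdk_repo checkout and the pid printed above (56729); only the output destinations are illustrative:

  # Parse the live tracepoint shared memory of the running spdk_tgt (command quoted in the NOTICE above)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 56729 > bdev_trace.txt
  # Or keep a copy of the shm file for offline analysis after the target exits, as the last NOTICE suggests
  cp /dev/shm/spdk_tgt_trace.pid56729 /tmp/
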
00:04:13.276 [2024-11-20 16:49:31.691712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.276 [2024-11-20 16:49:31.766497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:14.213 16:49:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.213 16:49:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:14.213 16:49:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.213 16:49:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.213 16:49:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.213 16:49:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.213 16:49:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.213 16:49:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.213 16:49:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 ************************************ 00:04:14.213 START TEST rpc_integrity 00:04:14.213 ************************************ 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.213 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.213 { 00:04:14.213 "name": "Malloc0", 00:04:14.213 "aliases": [ 00:04:14.213 "9d70e696-c2c0-46f6-b558-712ac4de7def" 00:04:14.213 ], 00:04:14.213 "product_name": "Malloc disk", 00:04:14.213 "block_size": 512, 00:04:14.213 "num_blocks": 16384, 00:04:14.213 "uuid": "9d70e696-c2c0-46f6-b558-712ac4de7def", 00:04:14.213 "assigned_rate_limits": { 00:04:14.213 "rw_ios_per_sec": 0, 00:04:14.213 "rw_mbytes_per_sec": 0, 00:04:14.213 "r_mbytes_per_sec": 0, 00:04:14.213 "w_mbytes_per_sec": 0 00:04:14.213 }, 00:04:14.213 "claimed": false, 00:04:14.213 "zoned": false, 00:04:14.213 
"supported_io_types": { 00:04:14.213 "read": true, 00:04:14.213 "write": true, 00:04:14.213 "unmap": true, 00:04:14.213 "flush": true, 00:04:14.213 "reset": true, 00:04:14.213 "nvme_admin": false, 00:04:14.213 "nvme_io": false, 00:04:14.213 "nvme_io_md": false, 00:04:14.213 "write_zeroes": true, 00:04:14.213 "zcopy": true, 00:04:14.213 "get_zone_info": false, 00:04:14.213 "zone_management": false, 00:04:14.213 "zone_append": false, 00:04:14.213 "compare": false, 00:04:14.213 "compare_and_write": false, 00:04:14.213 "abort": true, 00:04:14.213 "seek_hole": false, 00:04:14.213 "seek_data": false, 00:04:14.213 "copy": true, 00:04:14.213 "nvme_iov_md": false 00:04:14.213 }, 00:04:14.213 "memory_domains": [ 00:04:14.213 { 00:04:14.213 "dma_device_id": "system", 00:04:14.213 "dma_device_type": 1 00:04:14.213 }, 00:04:14.213 { 00:04:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.213 "dma_device_type": 2 00:04:14.214 } 00:04:14.214 ], 00:04:14.214 "driver_specific": {} 00:04:14.214 } 00:04:14.214 ]' 00:04:14.214 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.214 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.214 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.214 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.214 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.214 [2024-11-20 16:49:32.716621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.214 [2024-11-20 16:49:32.716684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.214 [2024-11-20 16:49:32.716705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecc050 00:04:14.214 [2024-11-20 16:49:32.716715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.214 [2024-11-20 16:49:32.718419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.214 [2024-11-20 16:49:32.718454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.214 Passthru0 00:04:14.214 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.214 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.214 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.214 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.474 { 00:04:14.474 "name": "Malloc0", 00:04:14.474 "aliases": [ 00:04:14.474 "9d70e696-c2c0-46f6-b558-712ac4de7def" 00:04:14.474 ], 00:04:14.474 "product_name": "Malloc disk", 00:04:14.474 "block_size": 512, 00:04:14.474 "num_blocks": 16384, 00:04:14.474 "uuid": "9d70e696-c2c0-46f6-b558-712ac4de7def", 00:04:14.474 "assigned_rate_limits": { 00:04:14.474 "rw_ios_per_sec": 0, 00:04:14.474 "rw_mbytes_per_sec": 0, 00:04:14.474 "r_mbytes_per_sec": 0, 00:04:14.474 "w_mbytes_per_sec": 0 00:04:14.474 }, 00:04:14.474 "claimed": true, 00:04:14.474 "claim_type": "exclusive_write", 00:04:14.474 "zoned": false, 00:04:14.474 "supported_io_types": { 00:04:14.474 "read": true, 00:04:14.474 "write": true, 00:04:14.474 "unmap": true, 00:04:14.474 "flush": true, 00:04:14.474 "reset": true, 00:04:14.474 "nvme_admin": false, 
00:04:14.474 "nvme_io": false, 00:04:14.474 "nvme_io_md": false, 00:04:14.474 "write_zeroes": true, 00:04:14.474 "zcopy": true, 00:04:14.474 "get_zone_info": false, 00:04:14.474 "zone_management": false, 00:04:14.474 "zone_append": false, 00:04:14.474 "compare": false, 00:04:14.474 "compare_and_write": false, 00:04:14.474 "abort": true, 00:04:14.474 "seek_hole": false, 00:04:14.474 "seek_data": false, 00:04:14.474 "copy": true, 00:04:14.474 "nvme_iov_md": false 00:04:14.474 }, 00:04:14.474 "memory_domains": [ 00:04:14.474 { 00:04:14.474 "dma_device_id": "system", 00:04:14.474 "dma_device_type": 1 00:04:14.474 }, 00:04:14.474 { 00:04:14.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.474 "dma_device_type": 2 00:04:14.474 } 00:04:14.474 ], 00:04:14.474 "driver_specific": {} 00:04:14.474 }, 00:04:14.474 { 00:04:14.474 "name": "Passthru0", 00:04:14.474 "aliases": [ 00:04:14.474 "b64c0fc7-7d83-5faf-bd41-d27cd415d7eb" 00:04:14.474 ], 00:04:14.474 "product_name": "passthru", 00:04:14.474 "block_size": 512, 00:04:14.474 "num_blocks": 16384, 00:04:14.474 "uuid": "b64c0fc7-7d83-5faf-bd41-d27cd415d7eb", 00:04:14.474 "assigned_rate_limits": { 00:04:14.474 "rw_ios_per_sec": 0, 00:04:14.474 "rw_mbytes_per_sec": 0, 00:04:14.474 "r_mbytes_per_sec": 0, 00:04:14.474 "w_mbytes_per_sec": 0 00:04:14.474 }, 00:04:14.474 "claimed": false, 00:04:14.474 "zoned": false, 00:04:14.474 "supported_io_types": { 00:04:14.474 "read": true, 00:04:14.474 "write": true, 00:04:14.474 "unmap": true, 00:04:14.474 "flush": true, 00:04:14.474 "reset": true, 00:04:14.474 "nvme_admin": false, 00:04:14.474 "nvme_io": false, 00:04:14.474 "nvme_io_md": false, 00:04:14.474 "write_zeroes": true, 00:04:14.474 "zcopy": true, 00:04:14.474 "get_zone_info": false, 00:04:14.474 "zone_management": false, 00:04:14.474 "zone_append": false, 00:04:14.474 "compare": false, 00:04:14.474 "compare_and_write": false, 00:04:14.474 "abort": true, 00:04:14.474 "seek_hole": false, 00:04:14.474 "seek_data": false, 00:04:14.474 "copy": true, 00:04:14.474 "nvme_iov_md": false 00:04:14.474 }, 00:04:14.474 "memory_domains": [ 00:04:14.474 { 00:04:14.474 "dma_device_id": "system", 00:04:14.474 "dma_device_type": 1 00:04:14.474 }, 00:04:14.474 { 00:04:14.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.474 "dma_device_type": 2 00:04:14.474 } 00:04:14.474 ], 00:04:14.474 "driver_specific": { 00:04:14.474 "passthru": { 00:04:14.474 "name": "Passthru0", 00:04:14.474 "base_bdev_name": "Malloc0" 00:04:14.474 } 00:04:14.474 } 00:04:14.474 } 00:04:14.474 ]' 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.474 16:49:32 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.474 16:49:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.474 00:04:14.474 real 0m0.316s 00:04:14.474 user 0m0.200s 00:04:14.474 sys 0m0.052s 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.474 16:49:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.474 ************************************ 00:04:14.474 END TEST rpc_integrity 00:04:14.475 ************************************ 00:04:14.475 16:49:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.475 16:49:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.475 16:49:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.475 16:49:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.475 ************************************ 00:04:14.475 START TEST rpc_plugins 00:04:14.475 ************************************ 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:14.475 16:49:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.475 16:49:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.475 16:49:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.475 16:49:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.475 16:49:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.475 { 00:04:14.475 "name": "Malloc1", 00:04:14.475 "aliases": [ 00:04:14.475 "7f1a4023-fcb6-4210-957c-531b2177f51d" 00:04:14.475 ], 00:04:14.475 "product_name": "Malloc disk", 00:04:14.475 "block_size": 4096, 00:04:14.475 "num_blocks": 256, 00:04:14.475 "uuid": "7f1a4023-fcb6-4210-957c-531b2177f51d", 00:04:14.475 "assigned_rate_limits": { 00:04:14.475 "rw_ios_per_sec": 0, 00:04:14.475 "rw_mbytes_per_sec": 0, 00:04:14.475 "r_mbytes_per_sec": 0, 00:04:14.475 "w_mbytes_per_sec": 0 00:04:14.475 }, 00:04:14.475 "claimed": false, 00:04:14.475 "zoned": false, 00:04:14.475 "supported_io_types": { 00:04:14.475 "read": true, 00:04:14.475 "write": true, 00:04:14.475 "unmap": true, 00:04:14.475 "flush": true, 00:04:14.475 "reset": true, 00:04:14.475 "nvme_admin": false, 00:04:14.475 "nvme_io": false, 00:04:14.475 "nvme_io_md": false, 00:04:14.475 "write_zeroes": true, 00:04:14.475 "zcopy": true, 00:04:14.475 "get_zone_info": false, 00:04:14.475 "zone_management": false, 00:04:14.475 "zone_append": false, 00:04:14.475 "compare": false, 00:04:14.475 "compare_and_write": false, 00:04:14.475 "abort": true, 00:04:14.475 "seek_hole": false, 00:04:14.475 "seek_data": false, 00:04:14.475 "copy": true, 00:04:14.475 "nvme_iov_md": false 00:04:14.475 }, 00:04:14.475 "memory_domains": [ 00:04:14.475 { 
00:04:14.475 "dma_device_id": "system", 00:04:14.475 "dma_device_type": 1 00:04:14.475 }, 00:04:14.475 { 00:04:14.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.475 "dma_device_type": 2 00:04:14.475 } 00:04:14.475 ], 00:04:14.475 "driver_specific": {} 00:04:14.475 } 00:04:14.475 ]' 00:04:14.475 16:49:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.475 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.475 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.475 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.475 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.734 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.734 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.734 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.734 16:49:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.734 00:04:14.734 real 0m0.150s 00:04:14.734 user 0m0.104s 00:04:14.734 sys 0m0.018s 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.734 16:49:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.734 ************************************ 00:04:14.734 END TEST rpc_plugins 00:04:14.734 ************************************ 00:04:14.734 16:49:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.735 16:49:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.735 16:49:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.735 16:49:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.735 ************************************ 00:04:14.735 START TEST rpc_trace_cmd_test 00:04:14.735 ************************************ 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.735 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56729", 00:04:14.735 "tpoint_group_mask": "0x8", 00:04:14.735 "iscsi_conn": { 00:04:14.735 "mask": "0x2", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "scsi": { 00:04:14.735 "mask": "0x4", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "bdev": { 00:04:14.735 "mask": "0x8", 00:04:14.735 "tpoint_mask": "0xffffffffffffffff" 00:04:14.735 }, 00:04:14.735 "nvmf_rdma": { 00:04:14.735 "mask": "0x10", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "nvmf_tcp": { 00:04:14.735 "mask": "0x20", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "ftl": { 00:04:14.735 
"mask": "0x40", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "blobfs": { 00:04:14.735 "mask": "0x80", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "dsa": { 00:04:14.735 "mask": "0x200", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "thread": { 00:04:14.735 "mask": "0x400", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "nvme_pcie": { 00:04:14.735 "mask": "0x800", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "iaa": { 00:04:14.735 "mask": "0x1000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "nvme_tcp": { 00:04:14.735 "mask": "0x2000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "bdev_nvme": { 00:04:14.735 "mask": "0x4000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "sock": { 00:04:14.735 "mask": "0x8000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "blob": { 00:04:14.735 "mask": "0x10000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "bdev_raid": { 00:04:14.735 "mask": "0x20000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 }, 00:04:14.735 "scheduler": { 00:04:14.735 "mask": "0x40000", 00:04:14.735 "tpoint_mask": "0x0" 00:04:14.735 } 00:04:14.735 }' 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.735 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.994 00:04:14.994 real 0m0.273s 00:04:14.994 user 0m0.235s 00:04:14.994 sys 0m0.030s 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.994 16:49:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.994 ************************************ 00:04:14.994 END TEST rpc_trace_cmd_test 00:04:14.994 ************************************ 00:04:14.994 16:49:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.994 16:49:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.994 16:49:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.994 16:49:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.994 16:49:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.994 16:49:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.994 ************************************ 00:04:14.994 START TEST rpc_daemon_integrity 00:04:14.994 ************************************ 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.994 
16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.994 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.266 { 00:04:15.266 "name": "Malloc2", 00:04:15.266 "aliases": [ 00:04:15.266 "d347ec6c-3d72-4c46-8556-6bde4233fd7e" 00:04:15.266 ], 00:04:15.266 "product_name": "Malloc disk", 00:04:15.266 "block_size": 512, 00:04:15.266 "num_blocks": 16384, 00:04:15.266 "uuid": "d347ec6c-3d72-4c46-8556-6bde4233fd7e", 00:04:15.266 "assigned_rate_limits": { 00:04:15.266 "rw_ios_per_sec": 0, 00:04:15.266 "rw_mbytes_per_sec": 0, 00:04:15.266 "r_mbytes_per_sec": 0, 00:04:15.266 "w_mbytes_per_sec": 0 00:04:15.266 }, 00:04:15.266 "claimed": false, 00:04:15.266 "zoned": false, 00:04:15.266 "supported_io_types": { 00:04:15.266 "read": true, 00:04:15.266 "write": true, 00:04:15.266 "unmap": true, 00:04:15.266 "flush": true, 00:04:15.266 "reset": true, 00:04:15.266 "nvme_admin": false, 00:04:15.266 "nvme_io": false, 00:04:15.266 "nvme_io_md": false, 00:04:15.266 "write_zeroes": true, 00:04:15.266 "zcopy": true, 00:04:15.266 "get_zone_info": false, 00:04:15.266 "zone_management": false, 00:04:15.266 "zone_append": false, 00:04:15.266 "compare": false, 00:04:15.266 "compare_and_write": false, 00:04:15.266 "abort": true, 00:04:15.266 "seek_hole": false, 00:04:15.266 "seek_data": false, 00:04:15.266 "copy": true, 00:04:15.266 "nvme_iov_md": false 00:04:15.266 }, 00:04:15.266 "memory_domains": [ 00:04:15.266 { 00:04:15.266 "dma_device_id": "system", 00:04:15.266 "dma_device_type": 1 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.266 "dma_device_type": 2 00:04:15.266 } 00:04:15.266 ], 00:04:15.266 "driver_specific": {} 00:04:15.266 } 00:04:15.266 ]' 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.266 [2024-11-20 16:49:33.591660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.266 [2024-11-20 16:49:33.591721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:15.266 [2024-11-20 16:49:33.591743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed7030 00:04:15.266 [2024-11-20 16:49:33.591753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.266 [2024-11-20 16:49:33.593435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.266 [2024-11-20 16:49:33.593469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.266 Passthru0 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.266 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.266 { 00:04:15.266 "name": "Malloc2", 00:04:15.266 "aliases": [ 00:04:15.266 "d347ec6c-3d72-4c46-8556-6bde4233fd7e" 00:04:15.266 ], 00:04:15.266 "product_name": "Malloc disk", 00:04:15.266 "block_size": 512, 00:04:15.266 "num_blocks": 16384, 00:04:15.266 "uuid": "d347ec6c-3d72-4c46-8556-6bde4233fd7e", 00:04:15.266 "assigned_rate_limits": { 00:04:15.266 "rw_ios_per_sec": 0, 00:04:15.266 "rw_mbytes_per_sec": 0, 00:04:15.266 "r_mbytes_per_sec": 0, 00:04:15.266 "w_mbytes_per_sec": 0 00:04:15.266 }, 00:04:15.266 "claimed": true, 00:04:15.266 "claim_type": "exclusive_write", 00:04:15.266 "zoned": false, 00:04:15.266 "supported_io_types": { 00:04:15.266 "read": true, 00:04:15.266 "write": true, 00:04:15.266 "unmap": true, 00:04:15.266 "flush": true, 00:04:15.266 "reset": true, 00:04:15.266 "nvme_admin": false, 00:04:15.266 "nvme_io": false, 00:04:15.266 "nvme_io_md": false, 00:04:15.266 "write_zeroes": true, 00:04:15.266 "zcopy": true, 00:04:15.266 "get_zone_info": false, 00:04:15.266 "zone_management": false, 00:04:15.266 "zone_append": false, 00:04:15.266 "compare": false, 00:04:15.266 "compare_and_write": false, 00:04:15.266 "abort": true, 00:04:15.266 "seek_hole": false, 00:04:15.266 "seek_data": false, 00:04:15.266 "copy": true, 00:04:15.266 "nvme_iov_md": false 00:04:15.266 }, 00:04:15.266 "memory_domains": [ 00:04:15.266 { 00:04:15.266 "dma_device_id": "system", 00:04:15.266 "dma_device_type": 1 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.266 "dma_device_type": 2 00:04:15.266 } 00:04:15.266 ], 00:04:15.266 "driver_specific": {} 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "name": "Passthru0", 00:04:15.266 "aliases": [ 00:04:15.266 "e0e32786-4bef-570c-9116-9a74d3b0ffd2" 00:04:15.266 ], 00:04:15.266 "product_name": "passthru", 00:04:15.266 "block_size": 512, 00:04:15.266 "num_blocks": 16384, 00:04:15.266 "uuid": "e0e32786-4bef-570c-9116-9a74d3b0ffd2", 00:04:15.266 "assigned_rate_limits": { 00:04:15.266 "rw_ios_per_sec": 0, 00:04:15.266 "rw_mbytes_per_sec": 0, 00:04:15.266 "r_mbytes_per_sec": 0, 00:04:15.266 "w_mbytes_per_sec": 0 00:04:15.266 }, 00:04:15.266 "claimed": false, 00:04:15.266 "zoned": false, 00:04:15.266 "supported_io_types": { 00:04:15.266 "read": true, 00:04:15.266 "write": true, 00:04:15.266 "unmap": true, 00:04:15.266 "flush": true, 00:04:15.266 "reset": true, 00:04:15.266 "nvme_admin": false, 00:04:15.267 "nvme_io": false, 00:04:15.267 "nvme_io_md": 
false, 00:04:15.267 "write_zeroes": true, 00:04:15.267 "zcopy": true, 00:04:15.267 "get_zone_info": false, 00:04:15.267 "zone_management": false, 00:04:15.267 "zone_append": false, 00:04:15.267 "compare": false, 00:04:15.267 "compare_and_write": false, 00:04:15.267 "abort": true, 00:04:15.267 "seek_hole": false, 00:04:15.267 "seek_data": false, 00:04:15.267 "copy": true, 00:04:15.267 "nvme_iov_md": false 00:04:15.267 }, 00:04:15.267 "memory_domains": [ 00:04:15.267 { 00:04:15.267 "dma_device_id": "system", 00:04:15.267 "dma_device_type": 1 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.267 "dma_device_type": 2 00:04:15.267 } 00:04:15.267 ], 00:04:15.267 "driver_specific": { 00:04:15.267 "passthru": { 00:04:15.267 "name": "Passthru0", 00:04:15.267 "base_bdev_name": "Malloc2" 00:04:15.267 } 00:04:15.267 } 00:04:15.267 } 00:04:15.267 ]' 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.267 00:04:15.267 real 0m0.309s 00:04:15.267 user 0m0.208s 00:04:15.267 sys 0m0.034s 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.267 ************************************ 00:04:15.267 16:49:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 END TEST rpc_daemon_integrity 00:04:15.267 ************************************ 00:04:15.267 16:49:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.267 16:49:33 rpc -- rpc/rpc.sh@84 -- # killprocess 56729 00:04:15.267 16:49:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 56729 ']' 00:04:15.267 16:49:33 rpc -- common/autotest_common.sh@958 -- # kill -0 56729 00:04:15.267 16:49:33 rpc -- common/autotest_common.sh@959 -- # uname 00:04:15.267 16:49:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.267 16:49:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56729 00:04:15.596 16:49:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.596 
16:49:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.597 killing process with pid 56729 00:04:15.597 16:49:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56729' 00:04:15.597 16:49:33 rpc -- common/autotest_common.sh@973 -- # kill 56729 00:04:15.597 16:49:33 rpc -- common/autotest_common.sh@978 -- # wait 56729 00:04:15.877 00:04:15.877 real 0m3.002s 00:04:15.877 user 0m3.897s 00:04:15.877 sys 0m0.718s 00:04:15.877 16:49:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.877 16:49:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.877 ************************************ 00:04:15.877 END TEST rpc 00:04:15.877 ************************************ 00:04:15.877 16:49:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:15.877 16:49:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.877 16:49:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.877 16:49:34 -- common/autotest_common.sh@10 -- # set +x 00:04:15.877 ************************************ 00:04:15.877 START TEST skip_rpc 00:04:15.877 ************************************ 00:04:15.877 16:49:34 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:15.877 * Looking for test storage... 00:04:15.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.877 16:49:34 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.877 16:49:34 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.877 16:49:34 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.136 16:49:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.136 --rc genhtml_branch_coverage=1 00:04:16.136 --rc genhtml_function_coverage=1 00:04:16.136 --rc genhtml_legend=1 00:04:16.136 --rc geninfo_all_blocks=1 00:04:16.136 --rc geninfo_unexecuted_blocks=1 00:04:16.136 00:04:16.136 ' 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.136 --rc genhtml_branch_coverage=1 00:04:16.136 --rc genhtml_function_coverage=1 00:04:16.136 --rc genhtml_legend=1 00:04:16.136 --rc geninfo_all_blocks=1 00:04:16.136 --rc geninfo_unexecuted_blocks=1 00:04:16.136 00:04:16.136 ' 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.136 --rc genhtml_branch_coverage=1 00:04:16.136 --rc genhtml_function_coverage=1 00:04:16.136 --rc genhtml_legend=1 00:04:16.136 --rc geninfo_all_blocks=1 00:04:16.136 --rc geninfo_unexecuted_blocks=1 00:04:16.136 00:04:16.136 ' 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.136 --rc genhtml_branch_coverage=1 00:04:16.136 --rc genhtml_function_coverage=1 00:04:16.136 --rc genhtml_legend=1 00:04:16.136 --rc geninfo_all_blocks=1 00:04:16.136 --rc geninfo_unexecuted_blocks=1 00:04:16.136 00:04:16.136 ' 00:04:16.136 16:49:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.136 16:49:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.136 16:49:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.136 16:49:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.136 ************************************ 00:04:16.136 START TEST skip_rpc 00:04:16.136 ************************************ 00:04:16.136 16:49:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:16.136 16:49:34 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56935 00:04:16.136 16:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.136 16:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.136 16:49:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.136 [2024-11-20 16:49:34.513774] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:16.136 [2024-11-20 16:49:34.513940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:04:16.136 [2024-11-20 16:49:34.662425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.396 [2024-11-20 16:49:34.718470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.396 [2024-11-20 16:49:34.793967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56935 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56935 ']' 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56935 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56935 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.671 killing process with pid 56935 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56935' 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56935 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56935 00:04:21.671 00:04:21.671 real 0m5.421s 00:04:21.671 user 0m5.035s 00:04:21.671 sys 0m0.298s 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.671 ************************************ 00:04:21.671 END TEST skip_rpc 00:04:21.671 ************************************ 00:04:21.671 16:49:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 16:49:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.671 16:49:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.671 16:49:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.671 16:49:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 ************************************ 00:04:21.671 START TEST skip_rpc_with_json 00:04:21.671 ************************************ 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57020 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57020 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57020 ']' 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.671 16:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 [2024-11-20 16:49:39.972837] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
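The skip_rpc_with_json flow starting here boots a regular target (pid 57020), saves its configuration with 'rpc_cmd save_config' (the JSON dump below), and later restarts a target from that file via '--json', as logged further down for pid 57041. A standalone sketch of the same round trip, assuming scripts/rpc.py as a stand-in for the test's rpc_cmd wrapper and the CONFIG_PATH set earlier in skip_rpc.sh:

  # Save the running target's configuration over the default /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # Restart a target that loads the saved subsystem configuration instead of serving RPCs
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
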
00:04:21.671 [2024-11-20 16:49:39.972940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57020 ] 00:04:21.671 [2024-11-20 16:49:40.110676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.671 [2024-11-20 16:49:40.162992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.930 [2024-11-20 16:49:40.232367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.930 [2024-11-20 16:49:40.440301] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.930 request: 00:04:21.930 { 00:04:21.930 "trtype": "tcp", 00:04:21.930 "method": "nvmf_get_transports", 00:04:21.930 "req_id": 1 00:04:21.930 } 00:04:21.930 Got JSON-RPC error response 00:04:21.930 response: 00:04:21.930 { 00:04:21.930 "code": -19, 00:04:21.930 "message": "No such device" 00:04:21.930 } 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.930 [2024-11-20 16:49:40.452410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.930 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.190 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.190 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.190 { 00:04:22.190 "subsystems": [ 00:04:22.190 { 00:04:22.190 "subsystem": "fsdev", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "fsdev_set_opts", 00:04:22.190 "params": { 00:04:22.190 "fsdev_io_pool_size": 65535, 00:04:22.190 "fsdev_io_cache_size": 256 00:04:22.190 } 00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "keyring", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "iobuf", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "iobuf_set_options", 00:04:22.190 "params": { 00:04:22.190 "small_pool_count": 8192, 00:04:22.190 "large_pool_count": 1024, 00:04:22.190 "small_bufsize": 8192, 00:04:22.190 "large_bufsize": 135168, 00:04:22.190 "enable_numa": false 00:04:22.190 } 
00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "sock", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "sock_set_default_impl", 00:04:22.190 "params": { 00:04:22.190 "impl_name": "uring" 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "sock_impl_set_options", 00:04:22.190 "params": { 00:04:22.190 "impl_name": "ssl", 00:04:22.190 "recv_buf_size": 4096, 00:04:22.190 "send_buf_size": 4096, 00:04:22.190 "enable_recv_pipe": true, 00:04:22.190 "enable_quickack": false, 00:04:22.190 "enable_placement_id": 0, 00:04:22.190 "enable_zerocopy_send_server": true, 00:04:22.190 "enable_zerocopy_send_client": false, 00:04:22.190 "zerocopy_threshold": 0, 00:04:22.190 "tls_version": 0, 00:04:22.190 "enable_ktls": false 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "sock_impl_set_options", 00:04:22.190 "params": { 00:04:22.190 "impl_name": "posix", 00:04:22.190 "recv_buf_size": 2097152, 00:04:22.190 "send_buf_size": 2097152, 00:04:22.190 "enable_recv_pipe": true, 00:04:22.190 "enable_quickack": false, 00:04:22.190 "enable_placement_id": 0, 00:04:22.190 "enable_zerocopy_send_server": true, 00:04:22.190 "enable_zerocopy_send_client": false, 00:04:22.190 "zerocopy_threshold": 0, 00:04:22.190 "tls_version": 0, 00:04:22.190 "enable_ktls": false 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "sock_impl_set_options", 00:04:22.190 "params": { 00:04:22.190 "impl_name": "uring", 00:04:22.190 "recv_buf_size": 2097152, 00:04:22.190 "send_buf_size": 2097152, 00:04:22.190 "enable_recv_pipe": true, 00:04:22.190 "enable_quickack": false, 00:04:22.190 "enable_placement_id": 0, 00:04:22.190 "enable_zerocopy_send_server": false, 00:04:22.190 "enable_zerocopy_send_client": false, 00:04:22.190 "zerocopy_threshold": 0, 00:04:22.190 "tls_version": 0, 00:04:22.190 "enable_ktls": false 00:04:22.190 } 00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "vmd", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "accel", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "accel_set_options", 00:04:22.190 "params": { 00:04:22.190 "small_cache_size": 128, 00:04:22.190 "large_cache_size": 16, 00:04:22.190 "task_count": 2048, 00:04:22.190 "sequence_count": 2048, 00:04:22.190 "buf_count": 2048 00:04:22.190 } 00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "bdev", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "bdev_set_options", 00:04:22.190 "params": { 00:04:22.190 "bdev_io_pool_size": 65535, 00:04:22.190 "bdev_io_cache_size": 256, 00:04:22.190 "bdev_auto_examine": true, 00:04:22.190 "iobuf_small_cache_size": 128, 00:04:22.190 "iobuf_large_cache_size": 16 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "bdev_raid_set_options", 00:04:22.190 "params": { 00:04:22.190 "process_window_size_kb": 1024, 00:04:22.190 "process_max_bandwidth_mb_sec": 0 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "bdev_iscsi_set_options", 00:04:22.190 "params": { 00:04:22.190 "timeout_sec": 30 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "bdev_nvme_set_options", 00:04:22.190 "params": { 00:04:22.190 "action_on_timeout": "none", 00:04:22.190 "timeout_us": 0, 00:04:22.190 "timeout_admin_us": 0, 00:04:22.190 "keep_alive_timeout_ms": 10000, 00:04:22.190 "arbitration_burst": 0, 00:04:22.190 "low_priority_weight": 0, 00:04:22.190 "medium_priority_weight": 
0, 00:04:22.190 "high_priority_weight": 0, 00:04:22.190 "nvme_adminq_poll_period_us": 10000, 00:04:22.190 "nvme_ioq_poll_period_us": 0, 00:04:22.190 "io_queue_requests": 0, 00:04:22.190 "delay_cmd_submit": true, 00:04:22.190 "transport_retry_count": 4, 00:04:22.190 "bdev_retry_count": 3, 00:04:22.190 "transport_ack_timeout": 0, 00:04:22.190 "ctrlr_loss_timeout_sec": 0, 00:04:22.190 "reconnect_delay_sec": 0, 00:04:22.190 "fast_io_fail_timeout_sec": 0, 00:04:22.190 "disable_auto_failback": false, 00:04:22.190 "generate_uuids": false, 00:04:22.190 "transport_tos": 0, 00:04:22.190 "nvme_error_stat": false, 00:04:22.190 "rdma_srq_size": 0, 00:04:22.190 "io_path_stat": false, 00:04:22.190 "allow_accel_sequence": false, 00:04:22.190 "rdma_max_cq_size": 0, 00:04:22.190 "rdma_cm_event_timeout_ms": 0, 00:04:22.190 "dhchap_digests": [ 00:04:22.190 "sha256", 00:04:22.190 "sha384", 00:04:22.190 "sha512" 00:04:22.190 ], 00:04:22.190 "dhchap_dhgroups": [ 00:04:22.190 "null", 00:04:22.190 "ffdhe2048", 00:04:22.190 "ffdhe3072", 00:04:22.190 "ffdhe4096", 00:04:22.190 "ffdhe6144", 00:04:22.190 "ffdhe8192" 00:04:22.190 ] 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "bdev_nvme_set_hotplug", 00:04:22.190 "params": { 00:04:22.190 "period_us": 100000, 00:04:22.190 "enable": false 00:04:22.190 } 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "method": "bdev_wait_for_examine" 00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "scsi", 00:04:22.190 "config": null 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "scheduler", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "framework_set_scheduler", 00:04:22.190 "params": { 00:04:22.190 "name": "static" 00:04:22.190 } 00:04:22.190 } 00:04:22.190 ] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "vhost_scsi", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "vhost_blk", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "ublk", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "nbd", 00:04:22.190 "config": [] 00:04:22.190 }, 00:04:22.190 { 00:04:22.190 "subsystem": "nvmf", 00:04:22.190 "config": [ 00:04:22.190 { 00:04:22.190 "method": "nvmf_set_config", 00:04:22.190 "params": { 00:04:22.190 "discovery_filter": "match_any", 00:04:22.190 "admin_cmd_passthru": { 00:04:22.190 "identify_ctrlr": false 00:04:22.190 }, 00:04:22.190 "dhchap_digests": [ 00:04:22.191 "sha256", 00:04:22.191 "sha384", 00:04:22.191 "sha512" 00:04:22.191 ], 00:04:22.191 "dhchap_dhgroups": [ 00:04:22.191 "null", 00:04:22.191 "ffdhe2048", 00:04:22.191 "ffdhe3072", 00:04:22.191 "ffdhe4096", 00:04:22.191 "ffdhe6144", 00:04:22.191 "ffdhe8192" 00:04:22.191 ] 00:04:22.191 } 00:04:22.191 }, 00:04:22.191 { 00:04:22.191 "method": "nvmf_set_max_subsystems", 00:04:22.191 "params": { 00:04:22.191 "max_subsystems": 1024 00:04:22.191 } 00:04:22.191 }, 00:04:22.191 { 00:04:22.191 "method": "nvmf_set_crdt", 00:04:22.191 "params": { 00:04:22.191 "crdt1": 0, 00:04:22.191 "crdt2": 0, 00:04:22.191 "crdt3": 0 00:04:22.191 } 00:04:22.191 }, 00:04:22.191 { 00:04:22.191 "method": "nvmf_create_transport", 00:04:22.191 "params": { 00:04:22.191 "trtype": "TCP", 00:04:22.191 "max_queue_depth": 128, 00:04:22.191 "max_io_qpairs_per_ctrlr": 127, 00:04:22.191 "in_capsule_data_size": 4096, 00:04:22.191 "max_io_size": 131072, 00:04:22.191 "io_unit_size": 131072, 00:04:22.191 "max_aq_depth": 128, 00:04:22.191 "num_shared_buffers": 511, 00:04:22.191 
"buf_cache_size": 4294967295, 00:04:22.191 "dif_insert_or_strip": false, 00:04:22.191 "zcopy": false, 00:04:22.191 "c2h_success": true, 00:04:22.191 "sock_priority": 0, 00:04:22.191 "abort_timeout_sec": 1, 00:04:22.191 "ack_timeout": 0, 00:04:22.191 "data_wr_pool_size": 0 00:04:22.191 } 00:04:22.191 } 00:04:22.191 ] 00:04:22.191 }, 00:04:22.191 { 00:04:22.191 "subsystem": "iscsi", 00:04:22.191 "config": [ 00:04:22.191 { 00:04:22.191 "method": "iscsi_set_options", 00:04:22.191 "params": { 00:04:22.191 "node_base": "iqn.2016-06.io.spdk", 00:04:22.191 "max_sessions": 128, 00:04:22.191 "max_connections_per_session": 2, 00:04:22.191 "max_queue_depth": 64, 00:04:22.191 "default_time2wait": 2, 00:04:22.191 "default_time2retain": 20, 00:04:22.191 "first_burst_length": 8192, 00:04:22.191 "immediate_data": true, 00:04:22.191 "allow_duplicated_isid": false, 00:04:22.191 "error_recovery_level": 0, 00:04:22.191 "nop_timeout": 60, 00:04:22.191 "nop_in_interval": 30, 00:04:22.191 "disable_chap": false, 00:04:22.191 "require_chap": false, 00:04:22.191 "mutual_chap": false, 00:04:22.191 "chap_group": 0, 00:04:22.191 "max_large_datain_per_connection": 64, 00:04:22.191 "max_r2t_per_connection": 4, 00:04:22.191 "pdu_pool_size": 36864, 00:04:22.191 "immediate_data_pool_size": 16384, 00:04:22.191 "data_out_pool_size": 2048 00:04:22.191 } 00:04:22.191 } 00:04:22.191 ] 00:04:22.191 } 00:04:22.191 ] 00:04:22.191 } 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57020 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57020 ']' 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57020 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57020 00:04:22.191 killing process with pid 57020 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57020' 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57020 00:04:22.191 16:49:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57020 00:04:22.759 16:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57041 00:04:22.759 16:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.759 16:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:28.032 16:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57041 00:04:28.032 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57041 ']' 00:04:28.032 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57041 00:04:28.032 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.033 16:49:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57041 00:04:28.033 killing process with pid 57041 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57041' 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57041 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57041 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.033 00:04:28.033 real 0m6.570s 00:04:28.033 user 0m6.105s 00:04:28.033 sys 0m0.610s 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.033 ************************************ 00:04:28.033 END TEST skip_rpc_with_json 00:04:28.033 ************************************ 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.033 16:49:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.033 16:49:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.033 16:49:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.033 16:49:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.033 ************************************ 00:04:28.033 START TEST skip_rpc_with_delay 00:04:28.033 ************************************ 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.033 16:49:46 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:28.033 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.292 [2024-11-20 16:49:46.608947] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.292 ************************************ 00:04:28.292 END TEST skip_rpc_with_delay 00:04:28.292 ************************************ 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.292 00:04:28.292 real 0m0.093s 00:04:28.292 user 0m0.062s 00:04:28.292 sys 0m0.030s 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.292 16:49:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.292 16:49:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.292 16:49:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.292 16:49:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.292 16:49:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.292 16:49:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.292 16:49:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.292 ************************************ 00:04:28.292 START TEST exit_on_failed_rpc_init 00:04:28.292 ************************************ 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57151 00:04:28.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57151 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57151 ']' 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.292 16:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.292 [2024-11-20 16:49:46.746456] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:28.292 [2024-11-20 16:49:46.746555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57151 ] 00:04:28.551 [2024-11-20 16:49:46.889770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.551 [2024-11-20 16:49:46.947760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.551 [2024-11-20 16:49:47.018274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:28.810 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.810 [2024-11-20 16:49:47.294593] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:28.810 [2024-11-20 16:49:47.294713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57161 ] 00:04:29.076 [2024-11-20 16:49:47.441788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.076 [2024-11-20 16:49:47.500055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.076 [2024-11-20 16:49:47.500126] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
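Note: the "socket path ... in use" error above is exactly the condition exit_on_failed_rpc_init asserts — the second spdk_tgt (-m 0x2) tries to claim the same default RPC socket, /var/tmp/spdk.sock, that pid 57151 already holds, so its startup then aborts. A minimal sketch of running two targets side by side instead, by pointing the second at its own socket (/var/tmp/spdk2.sock is an illustrative path, not taken from this run):
    build/bin/spdk_tgt -m 0x1 &                            # first instance, default RPC socket /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # second instance, private RPC socket
    scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods  # address the second instance explicitly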
00:04:29.076 [2024-11-20 16:49:47.500141] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.076 [2024-11-20 16:49:47.500150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57151 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57151 ']' 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57151 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57151 00:04:29.076 killing process with pid 57151 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57151' 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57151 00:04:29.076 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57151 00:04:29.651 ************************************ 00:04:29.651 END TEST exit_on_failed_rpc_init 00:04:29.651 ************************************ 00:04:29.651 00:04:29.651 real 0m1.267s 00:04:29.651 user 0m1.355s 00:04:29.651 sys 0m0.372s 00:04:29.651 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.651 16:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 16:49:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.651 00:04:29.651 real 0m13.752s 00:04:29.651 user 0m12.727s 00:04:29.651 sys 0m1.528s 00:04:29.651 16:49:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.651 ************************************ 00:04:29.651 END TEST skip_rpc 00:04:29.651 ************************************ 00:04:29.651 16:49:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 16:49:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.651 16:49:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.651 16:49:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.651 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:29.651 
************************************ 00:04:29.651 START TEST rpc_client 00:04:29.651 ************************************ 00:04:29.651 16:49:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.651 * Looking for test storage... 00:04:29.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.651 16:49:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.651 16:49:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.651 16:49:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.911 16:49:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:29.911 16:49:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.912 16:49:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:29.912 16:49:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.912 16:49:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.912 16:49:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.912 16:49:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.912 --rc genhtml_branch_coverage=1 00:04:29.912 --rc genhtml_function_coverage=1 00:04:29.912 --rc genhtml_legend=1 00:04:29.912 --rc geninfo_all_blocks=1 00:04:29.912 --rc geninfo_unexecuted_blocks=1 00:04:29.912 00:04:29.912 ' 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.912 --rc genhtml_branch_coverage=1 00:04:29.912 --rc genhtml_function_coverage=1 00:04:29.912 --rc genhtml_legend=1 00:04:29.912 --rc geninfo_all_blocks=1 00:04:29.912 --rc geninfo_unexecuted_blocks=1 00:04:29.912 00:04:29.912 ' 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.912 --rc genhtml_branch_coverage=1 00:04:29.912 --rc genhtml_function_coverage=1 00:04:29.912 --rc genhtml_legend=1 00:04:29.912 --rc geninfo_all_blocks=1 00:04:29.912 --rc geninfo_unexecuted_blocks=1 00:04:29.912 00:04:29.912 ' 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.912 --rc genhtml_branch_coverage=1 00:04:29.912 --rc genhtml_function_coverage=1 00:04:29.912 --rc genhtml_legend=1 00:04:29.912 --rc geninfo_all_blocks=1 00:04:29.912 --rc geninfo_unexecuted_blocks=1 00:04:29.912 00:04:29.912 ' 00:04:29.912 16:49:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:29.912 OK 00:04:29.912 16:49:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.912 00:04:29.912 real 0m0.199s 00:04:29.912 user 0m0.123s 00:04:29.912 sys 0m0.084s 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.912 16:49:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.912 ************************************ 00:04:29.912 END TEST rpc_client 00:04:29.912 ************************************ 00:04:29.912 16:49:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.912 16:49:48 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.912 16:49:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.912 16:49:48 -- common/autotest_common.sh@10 -- # set +x 00:04:29.912 ************************************ 00:04:29.912 START TEST json_config 00:04:29.912 ************************************ 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.912 16:49:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.912 16:49:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.912 16:49:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.912 16:49:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.912 16:49:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.912 16:49:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:29.912 16:49:48 json_config -- scripts/common.sh@345 -- # : 1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.912 16:49:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.912 16:49:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@353 -- # local d=1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.912 16:49:48 json_config -- scripts/common.sh@355 -- # echo 1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.912 16:49:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@353 -- # local d=2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.912 16:49:48 json_config -- scripts/common.sh@355 -- # echo 2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.912 16:49:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.912 16:49:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.912 16:49:48 json_config -- scripts/common.sh@368 -- # return 0 00:04:29.912 16:49:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.172 16:49:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.172 --rc genhtml_branch_coverage=1 00:04:30.172 --rc genhtml_function_coverage=1 00:04:30.172 --rc genhtml_legend=1 00:04:30.172 --rc geninfo_all_blocks=1 00:04:30.172 --rc geninfo_unexecuted_blocks=1 00:04:30.172 00:04:30.172 ' 00:04:30.172 16:49:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.172 --rc genhtml_branch_coverage=1 00:04:30.172 --rc genhtml_function_coverage=1 00:04:30.172 --rc genhtml_legend=1 00:04:30.172 --rc geninfo_all_blocks=1 00:04:30.172 --rc geninfo_unexecuted_blocks=1 00:04:30.172 00:04:30.172 ' 00:04:30.172 16:49:48 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.172 --rc genhtml_branch_coverage=1 00:04:30.172 --rc genhtml_function_coverage=1 00:04:30.172 --rc genhtml_legend=1 00:04:30.172 --rc geninfo_all_blocks=1 00:04:30.172 --rc geninfo_unexecuted_blocks=1 00:04:30.172 00:04:30.172 ' 00:04:30.172 16:49:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.172 --rc genhtml_branch_coverage=1 00:04:30.172 --rc genhtml_function_coverage=1 00:04:30.172 --rc genhtml_legend=1 00:04:30.172 --rc geninfo_all_blocks=1 00:04:30.172 --rc geninfo_unexecuted_blocks=1 00:04:30.172 00:04:30.172 ' 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.172 16:49:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.172 16:49:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.172 16:49:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.172 16:49:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.172 16:49:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.172 16:49:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.172 16:49:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.172 16:49:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.172 16:49:48 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.172 16:49:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.172 16:49:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.172 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.172 16:49:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.172 16:49:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.173 INFO: JSON configuration test init 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.173 Waiting for target to run... 00:04:30.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
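Note: the "integer expression expected" message above comes from nvmf/common.sh line 33 running the numeric test '[' '' -eq 1 ']' on an empty flag; the run continues past it. A defensive variant (flag name hypothetical) that avoids the warning:
    [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ] && echo enabled   # default empty/unset flags to 0 before the -eq comparison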
00:04:30.173 16:49:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.173 16:49:48 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.173 16:49:48 json_config -- json_config/common.sh@10 -- # shift 00:04:30.173 16:49:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.173 16:49:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.173 16:49:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.173 16:49:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.173 16:49:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.173 16:49:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57295 00:04:30.173 16:49:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.173 16:49:48 json_config -- json_config/common.sh@25 -- # waitforlisten 57295 /var/tmp/spdk_tgt.sock 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 57295 ']' 00:04:30.173 16:49:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.173 16:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.173 [2024-11-20 16:49:48.573393] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
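Note: with --wait-for-rpc the target above starts only its RPC server and defers framework initialization until it is configured over RPC. Roughly the flow the json_config test drives next, written as plain shell (the pipe is an illustrative equivalent of how json_config.sh feeds load_config, and the save_config redirect target mirrors the spdk_tgt_config.json file used later for the diff check):
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # snapshot compared by json_diff.sh below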
00:04:30.173 [2024-11-20 16:49:48.573722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57295 ] 00:04:30.740 [2024-11-20 16:49:49.023851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.740 [2024-11-20 16:49:49.080607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.307 16:49:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.307 16:49:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:31.307 16:49:49 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.307 00:04:31.307 16:49:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:31.307 16:49:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:31.307 16:49:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.307 16:49:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.307 16:49:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:31.308 16:49:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:31.308 16:49:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.308 16:49:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.308 16:49:49 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.308 16:49:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:31.308 16:49:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:31.566 [2024-11-20 16:49:49.920701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:31.825 16:49:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.825 16:49:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:31.825 16:49:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:31.825 16:49:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@54 -- # sort 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:32.084 16:49:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.084 16:49:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:32.084 16:49:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:32.085 16:49:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.085 16:49:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:32.085 16:49:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.085 16:49:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.343 MallocForNvmf0 00:04:32.343 16:49:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.343 16:49:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.602 MallocForNvmf1 00:04:32.602 16:49:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.602 16:49:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.860 [2024-11-20 16:49:51.229162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.860 16:49:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.860 16:49:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.119 16:49:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.119 16:49:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.378 16:49:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.378 16:49:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.637 16:49:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.637 16:49:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.896 [2024-11-20 16:49:52.241822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.896 16:49:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:33.896 16:49:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.896 16:49:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.896 16:49:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:33.896 16:49:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.896 16:49:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.896 16:49:52 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:33.896 16:49:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.896 16:49:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.155 MallocBdevForConfigChangeCheck 00:04:34.155 16:49:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:34.155 16:49:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.155 16:49:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.155 16:49:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:34.155 16:49:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.722 INFO: shutting down applications... 00:04:34.722 16:49:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
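For reference, the tgt_rpc calls above expand (per json_config/common.sh) to this plain rpc.py sequence, which is what builds the TCP NVMe-oF target that just reported listening on 127.0.0.1 port 4420:
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420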
00:04:34.722 16:49:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:34.722 16:49:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:34.722 16:49:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:34.722 16:49:53 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:34.981 Calling clear_iscsi_subsystem 00:04:34.981 Calling clear_nvmf_subsystem 00:04:34.981 Calling clear_nbd_subsystem 00:04:34.981 Calling clear_ublk_subsystem 00:04:34.981 Calling clear_vhost_blk_subsystem 00:04:34.981 Calling clear_vhost_scsi_subsystem 00:04:34.981 Calling clear_bdev_subsystem 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.981 16:49:53 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:35.549 16:49:53 json_config -- json_config/json_config.sh@352 -- # break 00:04:35.549 16:49:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:35.549 16:49:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:35.549 16:49:53 json_config -- json_config/common.sh@31 -- # local app=target 00:04:35.549 16:49:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.549 16:49:53 json_config -- json_config/common.sh@35 -- # [[ -n 57295 ]] 00:04:35.549 16:49:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57295 00:04:35.549 16:49:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.549 16:49:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.549 16:49:53 json_config -- json_config/common.sh@41 -- # kill -0 57295 00:04:35.549 16:49:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.115 16:49:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.115 16:49:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.115 16:49:54 json_config -- json_config/common.sh@41 -- # kill -0 57295 00:04:36.115 16:49:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.115 16:49:54 json_config -- json_config/common.sh@43 -- # break 00:04:36.115 16:49:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.115 SPDK target shutdown done 00:04:36.115 16:49:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.115 INFO: relaunching applications... 00:04:36.115 16:49:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:04:36.115 16:49:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.115 16:49:54 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.115 16:49:54 json_config -- json_config/common.sh@10 -- # shift 00:04:36.115 16:49:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.115 16:49:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.115 16:49:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.115 16:49:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.115 16:49:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.115 16:49:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57491 00:04:36.115 16:49:54 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.115 16:49:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.115 Waiting for target to run... 00:04:36.115 16:49:54 json_config -- json_config/common.sh@25 -- # waitforlisten 57491 /var/tmp/spdk_tgt.sock 00:04:36.115 16:49:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 57491 ']' 00:04:36.115 16:49:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.116 16:49:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.116 16:49:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.116 16:49:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.116 16:49:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.116 [2024-11-20 16:49:54.447981] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:36.116 [2024-11-20 16:49:54.448088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57491 ] 00:04:36.374 [2024-11-20 16:49:54.889229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.632 [2024-11-20 16:49:54.941097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.632 [2024-11-20 16:49:55.079432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:36.897 [2024-11-20 16:49:55.298307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.897 [2024-11-20 16:49:55.330375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:37.156 16:49:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.157 16:49:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:37.157 00:04:37.157 16:49:55 json_config -- json_config/common.sh@26 -- # echo '' 00:04:37.157 16:49:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:37.157 INFO: Checking if target configuration is the same... 
00:04:37.157 16:49:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:37.157 16:49:55 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.157 16:49:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:37.157 16:49:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.157 + '[' 2 -ne 2 ']' 00:04:37.157 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:37.157 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:37.157 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:37.157 +++ basename /dev/fd/62 00:04:37.157 ++ mktemp /tmp/62.XXX 00:04:37.157 + tmp_file_1=/tmp/62.T3U 00:04:37.157 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.157 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:37.157 + tmp_file_2=/tmp/spdk_tgt_config.json.vLF 00:04:37.157 + ret=0 00:04:37.157 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.725 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.725 + diff -u /tmp/62.T3U /tmp/spdk_tgt_config.json.vLF 00:04:37.725 INFO: JSON config files are the same 00:04:37.725 + echo 'INFO: JSON config files are the same' 00:04:37.725 + rm /tmp/62.T3U /tmp/spdk_tgt_config.json.vLF 00:04:37.725 + exit 0 00:04:37.725 INFO: changing configuration and checking if this can be detected... 00:04:37.725 16:49:56 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:37.725 16:49:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:37.725 16:49:56 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:37.725 16:49:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:37.983 16:49:56 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.984 16:49:56 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:37.984 16:49:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.984 + '[' 2 -ne 2 ']' 00:04:37.984 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:37.984 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:37.984 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:37.984 +++ basename /dev/fd/62 00:04:37.984 ++ mktemp /tmp/62.XXX 00:04:37.984 + tmp_file_1=/tmp/62.xT8 00:04:37.984 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.984 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:37.984 + tmp_file_2=/tmp/spdk_tgt_config.json.j6W 00:04:37.984 + ret=0 00:04:37.984 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:38.242 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:38.501 + diff -u /tmp/62.xT8 /tmp/spdk_tgt_config.json.j6W 00:04:38.501 + ret=1 00:04:38.501 + echo '=== Start of file: /tmp/62.xT8 ===' 00:04:38.501 + cat /tmp/62.xT8 00:04:38.501 + echo '=== End of file: /tmp/62.xT8 ===' 00:04:38.501 + echo '' 00:04:38.501 + echo '=== Start of file: /tmp/spdk_tgt_config.json.j6W ===' 00:04:38.501 + cat /tmp/spdk_tgt_config.json.j6W 00:04:38.501 + echo '=== End of file: /tmp/spdk_tgt_config.json.j6W ===' 00:04:38.501 + echo '' 00:04:38.501 + rm /tmp/62.xT8 /tmp/spdk_tgt_config.json.j6W 00:04:38.501 + exit 1 00:04:38.501 INFO: configuration change detected. 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:38.501 16:49:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.501 16:49:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 57491 ]] 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:38.501 16:49:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.501 16:49:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:38.501 16:49:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 16:49:56 json_config -- json_config/json_config.sh@330 -- # killprocess 57491 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 57491 ']' 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@958 -- # kill -0 57491 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@959 -- # uname 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57491 00:04:38.502 
killing process with pid 57491 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57491' 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@973 -- # kill 57491 00:04:38.502 16:49:56 json_config -- common/autotest_common.sh@978 -- # wait 57491 00:04:38.761 16:49:57 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.761 16:49:57 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:38.761 16:49:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.761 16:49:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 INFO: Success 00:04:38.761 16:49:57 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:38.761 16:49:57 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:38.761 ************************************ 00:04:38.761 END TEST json_config 00:04:38.761 ************************************ 00:04:38.761 00:04:38.761 real 0m8.914s 00:04:38.761 user 0m12.856s 00:04:38.761 sys 0m1.796s 00:04:38.761 16:49:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.761 16:49:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 16:49:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.761 16:49:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.761 16:49:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.761 16:49:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 ************************************ 00:04:38.761 START TEST json_config_extra_key 00:04:38.761 ************************************ 00:04:38.761 16:49:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.021 16:49:57 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.021 16:49:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.021 --rc genhtml_branch_coverage=1 00:04:39.021 --rc genhtml_function_coverage=1 00:04:39.021 --rc genhtml_legend=1 00:04:39.021 --rc geninfo_all_blocks=1 00:04:39.021 --rc geninfo_unexecuted_blocks=1 00:04:39.021 00:04:39.021 ' 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.021 --rc genhtml_branch_coverage=1 00:04:39.021 --rc genhtml_function_coverage=1 00:04:39.021 --rc genhtml_legend=1 00:04:39.021 --rc geninfo_all_blocks=1 00:04:39.021 --rc geninfo_unexecuted_blocks=1 00:04:39.021 00:04:39.021 ' 00:04:39.021 16:49:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.021 --rc genhtml_branch_coverage=1 00:04:39.021 --rc genhtml_function_coverage=1 00:04:39.021 --rc genhtml_legend=1 00:04:39.021 --rc geninfo_all_blocks=1 00:04:39.021 --rc geninfo_unexecuted_blocks=1 00:04:39.021 00:04:39.022 ' 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.022 --rc genhtml_branch_coverage=1 00:04:39.022 --rc genhtml_function_coverage=1 00:04:39.022 --rc genhtml_legend=1 00:04:39.022 --rc geninfo_all_blocks=1 00:04:39.022 --rc geninfo_unexecuted_blocks=1 00:04:39.022 00:04:39.022 ' 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.022 16:49:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.022 16:49:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.022 16:49:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.022 16:49:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.022 16:49:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.022 16:49:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.022 16:49:57 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.022 16:49:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.022 16:49:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.022 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.022 16:49:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:39.022 INFO: launching applications... 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
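"Launching applications" here means starting a fresh spdk_tgt with the extra-key configuration applied at boot and blocking until its RPC socket answers, which is what json_config_test_start_app and waitforlisten do in the trace that follows. A simplified sketch of that launch-and-wait pattern (the retry count and the polling RPC are illustrative; the harness uses its own waitforlisten helper):

    # Start the target with the JSON config preloaded and a private RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # Poll the socket until the target is ready to accept RPCs.
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
            &> /dev/null && break
        sleep 0.1
    done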
00:04:39.022 16:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57645 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.022 Waiting for target to run... 00:04:39.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.022 16:49:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57645 /var/tmp/spdk_tgt.sock 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57645 ']' 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.022 16:49:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.022 [2024-11-20 16:49:57.522762] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:39.022 [2024-11-20 16:49:57.523080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57645 ] 00:04:39.590 [2024-11-20 16:49:57.977873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.590 [2024-11-20 16:49:58.034961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.590 [2024-11-20 16:49:58.071358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.158 16:49:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.158 16:49:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:40.158 00:04:40.158 INFO: shutting down applications... 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:40.158 16:49:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
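"Shutting down applications" is the cooperative half of the test: json_config_test_shutdown_app sends SIGINT to the target and then polls the pid for up to 30 half-second intervals before reporting the shutdown done, exactly the kill -SIGINT / kill -0 / sleep 0.5 sequence traced below. The shape of that loop, with variable names simplified:

    # Ask the target to exit cleanly, then wait for its pid to disappear.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done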
00:04:40.158 16:49:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57645 ]] 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57645 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57645 00:04:40.158 16:49:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57645 00:04:40.733 SPDK target shutdown done 00:04:40.733 Success 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.733 16:49:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.733 16:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.733 00:04:40.733 real 0m1.829s 00:04:40.733 user 0m1.794s 00:04:40.733 sys 0m0.458s 00:04:40.733 16:49:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.733 16:49:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.733 ************************************ 00:04:40.733 END TEST json_config_extra_key 00:04:40.733 ************************************ 00:04:40.733 16:49:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.733 16:49:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.733 16:49:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.733 16:49:59 -- common/autotest_common.sh@10 -- # set +x 00:04:40.733 ************************************ 00:04:40.733 START TEST alias_rpc 00:04:40.733 ************************************ 00:04:40.733 16:49:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.733 * Looking for test storage... 
00:04:40.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.733 16:49:59 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.733 16:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.733 16:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.994 16:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.994 16:49:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.994 16:49:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.994 16:49:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.995 --rc genhtml_branch_coverage=1 00:04:40.995 --rc genhtml_function_coverage=1 00:04:40.995 --rc genhtml_legend=1 00:04:40.995 --rc geninfo_all_blocks=1 00:04:40.995 --rc geninfo_unexecuted_blocks=1 00:04:40.995 00:04:40.995 ' 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.995 --rc genhtml_branch_coverage=1 00:04:40.995 --rc genhtml_function_coverage=1 00:04:40.995 --rc genhtml_legend=1 00:04:40.995 --rc geninfo_all_blocks=1 00:04:40.995 --rc geninfo_unexecuted_blocks=1 00:04:40.995 00:04:40.995 ' 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.995 --rc genhtml_branch_coverage=1 00:04:40.995 --rc genhtml_function_coverage=1 00:04:40.995 --rc genhtml_legend=1 00:04:40.995 --rc geninfo_all_blocks=1 00:04:40.995 --rc geninfo_unexecuted_blocks=1 00:04:40.995 00:04:40.995 ' 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.995 --rc genhtml_branch_coverage=1 00:04:40.995 --rc genhtml_function_coverage=1 00:04:40.995 --rc genhtml_legend=1 00:04:40.995 --rc geninfo_all_blocks=1 00:04:40.995 --rc geninfo_unexecuted_blocks=1 00:04:40.995 00:04:40.995 ' 00:04:40.995 16:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.995 16:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57723 00:04:40.995 16:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57723 00:04:40.995 16:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57723 ']' 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.995 16:49:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.995 [2024-11-20 16:49:59.393513] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:40.995 [2024-11-20 16:49:59.394193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57723 ] 00:04:41.254 [2024-11-20 16:49:59.542765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.254 [2024-11-20 16:49:59.599897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.254 [2024-11-20 16:49:59.668559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:42.190 16:50:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:42.190 16:50:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57723 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57723 ']' 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57723 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57723 00:04:42.190 killing process with pid 57723 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.190 16:50:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57723' 00:04:42.191 16:50:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 57723 00:04:42.191 16:50:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 57723 00:04:42.758 00:04:42.758 real 0m1.948s 00:04:42.758 user 0m2.216s 00:04:42.758 sys 0m0.448s 00:04:42.758 16:50:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.758 16:50:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.758 ************************************ 00:04:42.758 END TEST alias_rpc 00:04:42.758 ************************************ 00:04:42.758 16:50:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:42.758 16:50:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.758 16:50:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.758 16:50:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.758 16:50:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.758 ************************************ 00:04:42.758 START TEST spdkcli_tcp 00:04:42.758 ************************************ 00:04:42.758 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.758 * Looking for test storage... 
00:04:42.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:42.758 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.758 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.758 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.019 16:50:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.019 --rc genhtml_branch_coverage=1 00:04:43.019 --rc genhtml_function_coverage=1 00:04:43.019 --rc genhtml_legend=1 00:04:43.019 --rc geninfo_all_blocks=1 00:04:43.019 --rc geninfo_unexecuted_blocks=1 00:04:43.019 00:04:43.019 ' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.019 --rc genhtml_branch_coverage=1 00:04:43.019 --rc genhtml_function_coverage=1 00:04:43.019 --rc genhtml_legend=1 00:04:43.019 --rc geninfo_all_blocks=1 00:04:43.019 --rc geninfo_unexecuted_blocks=1 00:04:43.019 
00:04:43.019 ' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.019 --rc genhtml_branch_coverage=1 00:04:43.019 --rc genhtml_function_coverage=1 00:04:43.019 --rc genhtml_legend=1 00:04:43.019 --rc geninfo_all_blocks=1 00:04:43.019 --rc geninfo_unexecuted_blocks=1 00:04:43.019 00:04:43.019 ' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.019 --rc genhtml_branch_coverage=1 00:04:43.019 --rc genhtml_function_coverage=1 00:04:43.019 --rc genhtml_legend=1 00:04:43.019 --rc geninfo_all_blocks=1 00:04:43.019 --rc geninfo_unexecuted_blocks=1 00:04:43.019 00:04:43.019 ' 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57807 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57807 00:04:43.019 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57807 ']' 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.019 16:50:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.019 [2024-11-20 16:50:01.416411] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:43.019 [2024-11-20 16:50:01.416745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:04:43.278 [2024-11-20 16:50:01.561929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.278 [2024-11-20 16:50:01.625875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.278 [2024-11-20 16:50:01.625896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.278 [2024-11-20 16:50:01.699030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:43.537 16:50:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.537 16:50:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:43.537 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57822 00:04:43.537 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.537 16:50:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.796 [ 00:04:43.796 "bdev_malloc_delete", 00:04:43.796 "bdev_malloc_create", 00:04:43.796 "bdev_null_resize", 00:04:43.796 "bdev_null_delete", 00:04:43.796 "bdev_null_create", 00:04:43.796 "bdev_nvme_cuse_unregister", 00:04:43.796 "bdev_nvme_cuse_register", 00:04:43.796 "bdev_opal_new_user", 00:04:43.796 "bdev_opal_set_lock_state", 00:04:43.796 "bdev_opal_delete", 00:04:43.796 "bdev_opal_get_info", 00:04:43.796 "bdev_opal_create", 00:04:43.796 "bdev_nvme_opal_revert", 00:04:43.796 "bdev_nvme_opal_init", 00:04:43.796 "bdev_nvme_send_cmd", 00:04:43.796 "bdev_nvme_set_keys", 00:04:43.796 "bdev_nvme_get_path_iostat", 00:04:43.796 "bdev_nvme_get_mdns_discovery_info", 00:04:43.796 "bdev_nvme_stop_mdns_discovery", 00:04:43.797 "bdev_nvme_start_mdns_discovery", 00:04:43.797 "bdev_nvme_set_multipath_policy", 00:04:43.797 "bdev_nvme_set_preferred_path", 00:04:43.797 "bdev_nvme_get_io_paths", 00:04:43.797 "bdev_nvme_remove_error_injection", 00:04:43.797 "bdev_nvme_add_error_injection", 00:04:43.797 "bdev_nvme_get_discovery_info", 00:04:43.797 "bdev_nvme_stop_discovery", 00:04:43.797 "bdev_nvme_start_discovery", 00:04:43.797 "bdev_nvme_get_controller_health_info", 00:04:43.797 "bdev_nvme_disable_controller", 00:04:43.797 "bdev_nvme_enable_controller", 00:04:43.797 "bdev_nvme_reset_controller", 00:04:43.797 "bdev_nvme_get_transport_statistics", 00:04:43.797 "bdev_nvme_apply_firmware", 00:04:43.797 "bdev_nvme_detach_controller", 00:04:43.797 "bdev_nvme_get_controllers", 00:04:43.797 "bdev_nvme_attach_controller", 00:04:43.797 "bdev_nvme_set_hotplug", 00:04:43.797 "bdev_nvme_set_options", 00:04:43.797 "bdev_passthru_delete", 00:04:43.797 "bdev_passthru_create", 00:04:43.797 "bdev_lvol_set_parent_bdev", 00:04:43.797 "bdev_lvol_set_parent", 00:04:43.797 "bdev_lvol_check_shallow_copy", 00:04:43.797 "bdev_lvol_start_shallow_copy", 00:04:43.797 "bdev_lvol_grow_lvstore", 00:04:43.797 "bdev_lvol_get_lvols", 00:04:43.797 "bdev_lvol_get_lvstores", 00:04:43.797 "bdev_lvol_delete", 00:04:43.797 "bdev_lvol_set_read_only", 00:04:43.797 "bdev_lvol_resize", 00:04:43.797 "bdev_lvol_decouple_parent", 00:04:43.797 "bdev_lvol_inflate", 00:04:43.797 "bdev_lvol_rename", 00:04:43.797 "bdev_lvol_clone_bdev", 00:04:43.797 "bdev_lvol_clone", 00:04:43.797 "bdev_lvol_snapshot", 
00:04:43.797 "bdev_lvol_create", 00:04:43.797 "bdev_lvol_delete_lvstore", 00:04:43.797 "bdev_lvol_rename_lvstore", 00:04:43.797 "bdev_lvol_create_lvstore", 00:04:43.797 "bdev_raid_set_options", 00:04:43.797 "bdev_raid_remove_base_bdev", 00:04:43.797 "bdev_raid_add_base_bdev", 00:04:43.797 "bdev_raid_delete", 00:04:43.797 "bdev_raid_create", 00:04:43.797 "bdev_raid_get_bdevs", 00:04:43.797 "bdev_error_inject_error", 00:04:43.797 "bdev_error_delete", 00:04:43.797 "bdev_error_create", 00:04:43.797 "bdev_split_delete", 00:04:43.797 "bdev_split_create", 00:04:43.797 "bdev_delay_delete", 00:04:43.797 "bdev_delay_create", 00:04:43.797 "bdev_delay_update_latency", 00:04:43.797 "bdev_zone_block_delete", 00:04:43.797 "bdev_zone_block_create", 00:04:43.797 "blobfs_create", 00:04:43.797 "blobfs_detect", 00:04:43.797 "blobfs_set_cache_size", 00:04:43.797 "bdev_aio_delete", 00:04:43.797 "bdev_aio_rescan", 00:04:43.797 "bdev_aio_create", 00:04:43.797 "bdev_ftl_set_property", 00:04:43.797 "bdev_ftl_get_properties", 00:04:43.797 "bdev_ftl_get_stats", 00:04:43.797 "bdev_ftl_unmap", 00:04:43.797 "bdev_ftl_unload", 00:04:43.797 "bdev_ftl_delete", 00:04:43.797 "bdev_ftl_load", 00:04:43.797 "bdev_ftl_create", 00:04:43.797 "bdev_virtio_attach_controller", 00:04:43.797 "bdev_virtio_scsi_get_devices", 00:04:43.797 "bdev_virtio_detach_controller", 00:04:43.797 "bdev_virtio_blk_set_hotplug", 00:04:43.797 "bdev_iscsi_delete", 00:04:43.797 "bdev_iscsi_create", 00:04:43.797 "bdev_iscsi_set_options", 00:04:43.797 "bdev_uring_delete", 00:04:43.797 "bdev_uring_rescan", 00:04:43.797 "bdev_uring_create", 00:04:43.797 "accel_error_inject_error", 00:04:43.797 "ioat_scan_accel_module", 00:04:43.797 "dsa_scan_accel_module", 00:04:43.797 "iaa_scan_accel_module", 00:04:43.797 "keyring_file_remove_key", 00:04:43.797 "keyring_file_add_key", 00:04:43.797 "keyring_linux_set_options", 00:04:43.797 "fsdev_aio_delete", 00:04:43.797 "fsdev_aio_create", 00:04:43.797 "iscsi_get_histogram", 00:04:43.797 "iscsi_enable_histogram", 00:04:43.797 "iscsi_set_options", 00:04:43.797 "iscsi_get_auth_groups", 00:04:43.797 "iscsi_auth_group_remove_secret", 00:04:43.797 "iscsi_auth_group_add_secret", 00:04:43.797 "iscsi_delete_auth_group", 00:04:43.797 "iscsi_create_auth_group", 00:04:43.797 "iscsi_set_discovery_auth", 00:04:43.797 "iscsi_get_options", 00:04:43.797 "iscsi_target_node_request_logout", 00:04:43.797 "iscsi_target_node_set_redirect", 00:04:43.797 "iscsi_target_node_set_auth", 00:04:43.797 "iscsi_target_node_add_lun", 00:04:43.797 "iscsi_get_stats", 00:04:43.797 "iscsi_get_connections", 00:04:43.797 "iscsi_portal_group_set_auth", 00:04:43.797 "iscsi_start_portal_group", 00:04:43.797 "iscsi_delete_portal_group", 00:04:43.797 "iscsi_create_portal_group", 00:04:43.797 "iscsi_get_portal_groups", 00:04:43.797 "iscsi_delete_target_node", 00:04:43.797 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.797 "iscsi_target_node_add_pg_ig_maps", 00:04:43.797 "iscsi_create_target_node", 00:04:43.797 "iscsi_get_target_nodes", 00:04:43.797 "iscsi_delete_initiator_group", 00:04:43.797 "iscsi_initiator_group_remove_initiators", 00:04:43.797 "iscsi_initiator_group_add_initiators", 00:04:43.797 "iscsi_create_initiator_group", 00:04:43.797 "iscsi_get_initiator_groups", 00:04:43.797 "nvmf_set_crdt", 00:04:43.797 "nvmf_set_config", 00:04:43.797 "nvmf_set_max_subsystems", 00:04:43.797 "nvmf_stop_mdns_prr", 00:04:43.797 "nvmf_publish_mdns_prr", 00:04:43.797 "nvmf_subsystem_get_listeners", 00:04:43.797 "nvmf_subsystem_get_qpairs", 00:04:43.797 
"nvmf_subsystem_get_controllers", 00:04:43.797 "nvmf_get_stats", 00:04:43.797 "nvmf_get_transports", 00:04:43.797 "nvmf_create_transport", 00:04:43.797 "nvmf_get_targets", 00:04:43.797 "nvmf_delete_target", 00:04:43.797 "nvmf_create_target", 00:04:43.797 "nvmf_subsystem_allow_any_host", 00:04:43.797 "nvmf_subsystem_set_keys", 00:04:43.797 "nvmf_subsystem_remove_host", 00:04:43.797 "nvmf_subsystem_add_host", 00:04:43.797 "nvmf_ns_remove_host", 00:04:43.797 "nvmf_ns_add_host", 00:04:43.797 "nvmf_subsystem_remove_ns", 00:04:43.797 "nvmf_subsystem_set_ns_ana_group", 00:04:43.797 "nvmf_subsystem_add_ns", 00:04:43.797 "nvmf_subsystem_listener_set_ana_state", 00:04:43.797 "nvmf_discovery_get_referrals", 00:04:43.797 "nvmf_discovery_remove_referral", 00:04:43.797 "nvmf_discovery_add_referral", 00:04:43.797 "nvmf_subsystem_remove_listener", 00:04:43.797 "nvmf_subsystem_add_listener", 00:04:43.797 "nvmf_delete_subsystem", 00:04:43.797 "nvmf_create_subsystem", 00:04:43.797 "nvmf_get_subsystems", 00:04:43.797 "env_dpdk_get_mem_stats", 00:04:43.797 "nbd_get_disks", 00:04:43.797 "nbd_stop_disk", 00:04:43.797 "nbd_start_disk", 00:04:43.797 "ublk_recover_disk", 00:04:43.797 "ublk_get_disks", 00:04:43.797 "ublk_stop_disk", 00:04:43.797 "ublk_start_disk", 00:04:43.797 "ublk_destroy_target", 00:04:43.797 "ublk_create_target", 00:04:43.797 "virtio_blk_create_transport", 00:04:43.797 "virtio_blk_get_transports", 00:04:43.797 "vhost_controller_set_coalescing", 00:04:43.797 "vhost_get_controllers", 00:04:43.797 "vhost_delete_controller", 00:04:43.797 "vhost_create_blk_controller", 00:04:43.797 "vhost_scsi_controller_remove_target", 00:04:43.797 "vhost_scsi_controller_add_target", 00:04:43.797 "vhost_start_scsi_controller", 00:04:43.797 "vhost_create_scsi_controller", 00:04:43.797 "thread_set_cpumask", 00:04:43.797 "scheduler_set_options", 00:04:43.797 "framework_get_governor", 00:04:43.797 "framework_get_scheduler", 00:04:43.797 "framework_set_scheduler", 00:04:43.797 "framework_get_reactors", 00:04:43.797 "thread_get_io_channels", 00:04:43.797 "thread_get_pollers", 00:04:43.797 "thread_get_stats", 00:04:43.797 "framework_monitor_context_switch", 00:04:43.797 "spdk_kill_instance", 00:04:43.797 "log_enable_timestamps", 00:04:43.797 "log_get_flags", 00:04:43.797 "log_clear_flag", 00:04:43.797 "log_set_flag", 00:04:43.797 "log_get_level", 00:04:43.797 "log_set_level", 00:04:43.797 "log_get_print_level", 00:04:43.797 "log_set_print_level", 00:04:43.797 "framework_enable_cpumask_locks", 00:04:43.797 "framework_disable_cpumask_locks", 00:04:43.797 "framework_wait_init", 00:04:43.797 "framework_start_init", 00:04:43.797 "scsi_get_devices", 00:04:43.797 "bdev_get_histogram", 00:04:43.797 "bdev_enable_histogram", 00:04:43.797 "bdev_set_qos_limit", 00:04:43.797 "bdev_set_qd_sampling_period", 00:04:43.797 "bdev_get_bdevs", 00:04:43.797 "bdev_reset_iostat", 00:04:43.797 "bdev_get_iostat", 00:04:43.797 "bdev_examine", 00:04:43.797 "bdev_wait_for_examine", 00:04:43.797 "bdev_set_options", 00:04:43.797 "accel_get_stats", 00:04:43.797 "accel_set_options", 00:04:43.797 "accel_set_driver", 00:04:43.797 "accel_crypto_key_destroy", 00:04:43.797 "accel_crypto_keys_get", 00:04:43.797 "accel_crypto_key_create", 00:04:43.797 "accel_assign_opc", 00:04:43.797 "accel_get_module_info", 00:04:43.797 "accel_get_opc_assignments", 00:04:43.797 "vmd_rescan", 00:04:43.797 "vmd_remove_device", 00:04:43.797 "vmd_enable", 00:04:43.797 "sock_get_default_impl", 00:04:43.797 "sock_set_default_impl", 00:04:43.797 "sock_impl_set_options", 00:04:43.797 
"sock_impl_get_options", 00:04:43.797 "iobuf_get_stats", 00:04:43.797 "iobuf_set_options", 00:04:43.797 "keyring_get_keys", 00:04:43.797 "framework_get_pci_devices", 00:04:43.797 "framework_get_config", 00:04:43.797 "framework_get_subsystems", 00:04:43.797 "fsdev_set_opts", 00:04:43.797 "fsdev_get_opts", 00:04:43.797 "trace_get_info", 00:04:43.797 "trace_get_tpoint_group_mask", 00:04:43.797 "trace_disable_tpoint_group", 00:04:43.797 "trace_enable_tpoint_group", 00:04:43.797 "trace_clear_tpoint_mask", 00:04:43.797 "trace_set_tpoint_mask", 00:04:43.797 "notify_get_notifications", 00:04:43.797 "notify_get_types", 00:04:43.798 "spdk_get_version", 00:04:43.798 "rpc_get_methods" 00:04:43.798 ] 00:04:43.798 16:50:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.798 16:50:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.798 16:50:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57807 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57807 ']' 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57807 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57807 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.798 killing process with pid 57807 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57807' 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57807 00:04:43.798 16:50:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57807 00:04:44.374 ************************************ 00:04:44.374 END TEST spdkcli_tcp 00:04:44.375 ************************************ 00:04:44.375 00:04:44.375 real 0m1.508s 00:04:44.375 user 0m2.522s 00:04:44.375 sys 0m0.475s 00:04:44.375 16:50:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.375 16:50:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.375 16:50:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.375 16:50:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.375 16:50:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.375 16:50:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.375 ************************************ 00:04:44.375 START TEST dpdk_mem_utility 00:04:44.375 ************************************ 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.375 * Looking for test storage... 
00:04:44.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.375 16:50:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.375 --rc genhtml_branch_coverage=1 00:04:44.375 --rc genhtml_function_coverage=1 00:04:44.375 --rc genhtml_legend=1 00:04:44.375 --rc geninfo_all_blocks=1 00:04:44.375 --rc geninfo_unexecuted_blocks=1 00:04:44.375 00:04:44.375 ' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.375 --rc 
genhtml_branch_coverage=1 00:04:44.375 --rc genhtml_function_coverage=1 00:04:44.375 --rc genhtml_legend=1 00:04:44.375 --rc geninfo_all_blocks=1 00:04:44.375 --rc geninfo_unexecuted_blocks=1 00:04:44.375 00:04:44.375 ' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.375 --rc genhtml_branch_coverage=1 00:04:44.375 --rc genhtml_function_coverage=1 00:04:44.375 --rc genhtml_legend=1 00:04:44.375 --rc geninfo_all_blocks=1 00:04:44.375 --rc geninfo_unexecuted_blocks=1 00:04:44.375 00:04:44.375 ' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.375 --rc genhtml_branch_coverage=1 00:04:44.375 --rc genhtml_function_coverage=1 00:04:44.375 --rc genhtml_legend=1 00:04:44.375 --rc geninfo_all_blocks=1 00:04:44.375 --rc geninfo_unexecuted_blocks=1 00:04:44.375 00:04:44.375 ' 00:04:44.375 16:50:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:44.375 16:50:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57904 00:04:44.375 16:50:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57904 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57904 ']' 00:04:44.375 16:50:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.375 16:50:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.634 [2024-11-20 16:50:02.934375] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
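For orientation, the memory dump in the trace below is produced by test_dpdk_mem_info.sh through two calls: the env_dpdk_get_mem_stats RPC (which makes the target write /tmp/spdk_mem_dump.txt) and the dpdk_mem_info.py parser. A condensed sketch of that flow; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so the direct rpc.py invocation here is an assumption:

# Sketch of the dpdk_mem_utility flow (paths as seen in this run).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &                      # start the target in the background
spdkpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
"$SPDK/scripts/dpdk_mem_info.py"                  # summary: heaps, mempools, memzones
"$SPDK/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap id 0
kill "$spdkpid"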
00:04:44.634 [2024-11-20 16:50:02.934484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57904 ] 00:04:44.634 [2024-11-20 16:50:03.086480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.634 [2024-11-20 16:50:03.145995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.893 [2024-11-20 16:50:03.218620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:45.461 16:50:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.461 16:50:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:45.461 16:50:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:45.461 16:50:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:45.461 16:50:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.461 16:50:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.461 { 00:04:45.461 "filename": "/tmp/spdk_mem_dump.txt" 00:04:45.461 } 00:04:45.461 16:50:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.461 16:50:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:45.461 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:45.461 1 heaps totaling size 818.000000 MiB 00:04:45.461 size: 818.000000 MiB heap id: 0 00:04:45.461 end heaps---------- 00:04:45.461 9 mempools totaling size 603.782043 MiB 00:04:45.461 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:45.461 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:45.461 size: 100.555481 MiB name: bdev_io_57904 00:04:45.461 size: 50.003479 MiB name: msgpool_57904 00:04:45.461 size: 36.509338 MiB name: fsdev_io_57904 00:04:45.461 size: 21.763794 MiB name: PDU_Pool 00:04:45.461 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:45.461 size: 4.133484 MiB name: evtpool_57904 00:04:45.461 size: 0.026123 MiB name: Session_Pool 00:04:45.461 end mempools------- 00:04:45.461 6 memzones totaling size 4.142822 MiB 00:04:45.461 size: 1.000366 MiB name: RG_ring_0_57904 00:04:45.461 size: 1.000366 MiB name: RG_ring_1_57904 00:04:45.461 size: 1.000366 MiB name: RG_ring_4_57904 00:04:45.461 size: 1.000366 MiB name: RG_ring_5_57904 00:04:45.461 size: 0.125366 MiB name: RG_ring_2_57904 00:04:45.461 size: 0.015991 MiB name: RG_ring_3_57904 00:04:45.462 end memzones------- 00:04:45.462 16:50:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:45.722 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:04:45.722 list of free elements. 
size: 10.803223 MiB 00:04:45.722 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:45.722 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:45.722 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:45.722 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:45.722 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:45.722 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:45.722 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:45.722 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:45.722 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:04:45.722 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:45.722 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:45.722 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:45.722 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:45.722 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:45.723 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:45.723 list of standard malloc elements. size: 199.267883 MiB 00:04:45.723 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:45.723 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:45.723 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:45.723 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:45.723 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:45.723 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:45.723 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:45.723 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:45.723 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:45.723 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:45.723 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:45.723 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:45.723 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:45.723 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:45.724 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:04:45.724 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:45.724 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:45.725 element at 
address: 0x200028265500 with size: 0.000183 MiB 00:04:45.725 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e340 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:45.725 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:45.725 list of memzone associated elements. 
size: 607.928894 MiB 00:04:45.725 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:45.725 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:45.725 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:45.725 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:45.725 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:45.725 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57904_0 00:04:45.725 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:45.725 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57904_0 00:04:45.725 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:45.725 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57904_0 00:04:45.725 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:45.725 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:45.725 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:45.725 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:45.725 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:45.725 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57904_0 00:04:45.725 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:45.725 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57904 00:04:45.725 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:45.725 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57904 00:04:45.725 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:45.725 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:45.725 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:45.725 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:45.725 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:45.725 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:45.725 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:45.725 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:45.725 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:45.725 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57904 00:04:45.725 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:45.725 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57904 00:04:45.725 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:45.725 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57904 00:04:45.725 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:45.725 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57904 00:04:45.725 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:45.725 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57904 00:04:45.725 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:45.725 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57904 00:04:45.725 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:45.725 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:45.725 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:45.725 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:45.726 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:45.726 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:45.726 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:45.726 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57904 00:04:45.726 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:45.726 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57904 00:04:45.726 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:45.726 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:45.726 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:45.726 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:45.726 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:45.726 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57904 00:04:45.726 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:45.726 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:45.726 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:45.726 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57904 00:04:45.726 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:45.726 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57904 00:04:45.726 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:45.726 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57904 00:04:45.726 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:45.726 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:45.726 16:50:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:45.726 16:50:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57904 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57904 ']' 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57904 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57904 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.726 killing process with pid 57904 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57904' 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57904 00:04:45.726 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57904 00:04:45.984 00:04:45.984 real 0m1.738s 00:04:45.984 user 0m1.861s 00:04:45.984 sys 0m0.437s 00:04:45.984 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.984 ************************************ 00:04:45.984 END TEST dpdk_mem_utility 00:04:45.984 ************************************ 00:04:45.984 16:50:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 16:50:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:45.984 16:50:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.984 16:50:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.984 16:50:04 -- common/autotest_common.sh@10 -- # set +x 
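Both test sections above finish with the same traced cleanup helper (pids 57807 and 57904): check the pid argument is set, confirm the process is still alive with kill -0, resolve its name with ps to special-case sudo, then kill and wait. A minimal reconstruction of that helper from the traced steps; the sudo branch is only sketched, since it is not exercised in this run:

# killprocess(): reconstruction of the cleanup sequence traced above.
killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1                          # the "'[' -z $pid ']'" guard
    kill -0 "$pid" 2>/dev/null || return 0             # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"                               # sketched; not hit in this log
    else
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null                            # works because the test started it
}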
00:04:45.984 ************************************ 00:04:45.984 START TEST event 00:04:45.984 ************************************ 00:04:45.984 16:50:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:46.243 * Looking for test storage... 00:04:46.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.243 16:50:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.243 16:50:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.243 16:50:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.243 16:50:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.243 16:50:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.243 16:50:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.243 16:50:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.243 16:50:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.243 16:50:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.243 16:50:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.243 16:50:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.243 16:50:04 event -- scripts/common.sh@344 -- # case "$op" in 00:04:46.243 16:50:04 event -- scripts/common.sh@345 -- # : 1 00:04:46.243 16:50:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.243 16:50:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.243 16:50:04 event -- scripts/common.sh@365 -- # decimal 1 00:04:46.243 16:50:04 event -- scripts/common.sh@353 -- # local d=1 00:04:46.243 16:50:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.243 16:50:04 event -- scripts/common.sh@355 -- # echo 1 00:04:46.243 16:50:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.243 16:50:04 event -- scripts/common.sh@366 -- # decimal 2 00:04:46.243 16:50:04 event -- scripts/common.sh@353 -- # local d=2 00:04:46.243 16:50:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.243 16:50:04 event -- scripts/common.sh@355 -- # echo 2 00:04:46.243 16:50:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.243 16:50:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.243 16:50:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.243 16:50:04 event -- scripts/common.sh@368 -- # return 0 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.243 --rc genhtml_branch_coverage=1 00:04:46.243 --rc genhtml_function_coverage=1 00:04:46.243 --rc genhtml_legend=1 00:04:46.243 --rc geninfo_all_blocks=1 00:04:46.243 --rc geninfo_unexecuted_blocks=1 00:04:46.243 00:04:46.243 ' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.243 --rc genhtml_branch_coverage=1 00:04:46.243 --rc genhtml_function_coverage=1 00:04:46.243 --rc genhtml_legend=1 00:04:46.243 --rc 
geninfo_all_blocks=1 00:04:46.243 --rc geninfo_unexecuted_blocks=1 00:04:46.243 00:04:46.243 ' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.243 --rc genhtml_branch_coverage=1 00:04:46.243 --rc genhtml_function_coverage=1 00:04:46.243 --rc genhtml_legend=1 00:04:46.243 --rc geninfo_all_blocks=1 00:04:46.243 --rc geninfo_unexecuted_blocks=1 00:04:46.243 00:04:46.243 ' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.243 --rc genhtml_branch_coverage=1 00:04:46.243 --rc genhtml_function_coverage=1 00:04:46.243 --rc genhtml_legend=1 00:04:46.243 --rc geninfo_all_blocks=1 00:04:46.243 --rc geninfo_unexecuted_blocks=1 00:04:46.243 00:04:46.243 ' 00:04:46.243 16:50:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:46.243 16:50:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:46.243 16:50:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:46.243 16:50:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.243 16:50:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.243 ************************************ 00:04:46.243 START TEST event_perf 00:04:46.243 ************************************ 00:04:46.243 16:50:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:46.243 Running I/O for 1 seconds...[2024-11-20 16:50:04.689708] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:46.243 [2024-11-20 16:50:04.689802] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57978 ] 00:04:46.501 [2024-11-20 16:50:04.837100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:46.501 [2024-11-20 16:50:04.901286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.501 [2024-11-20 16:50:04.901379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.501 Running I/O for 1 seconds...[2024-11-20 16:50:04.901512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.501 [2024-11-20 16:50:04.901515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.438 00:04:47.438 lcore 0: 199062 00:04:47.438 lcore 1: 199062 00:04:47.438 lcore 2: 199062 00:04:47.438 lcore 3: 199062 00:04:47.438 done. 
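Each test section opens with the same scripts/common.sh version gate seen again just above: lcov --version is parsed with awk, then lt 1.15 2 decides which LCOV_OPTS get exported. A stripped-down sketch of that comparison (split both version strings on '.' or '-', compare field by field numerically, missing fields count as 0); this mirrors the traced steps rather than reproducing common.sh verbatim:

# Sketch of cmp_versions / lt as traced above.
cmp_versions() {
    local IFS=.- op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]    # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"    # the true branch taken in the trace above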
00:04:47.438 00:04:47.438 real 0m1.281s 00:04:47.438 user 0m4.101s 00:04:47.438 sys 0m0.056s 00:04:47.438 16:50:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.438 16:50:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.438 ************************************ 00:04:47.438 END TEST event_perf 00:04:47.438 ************************************ 00:04:47.696 16:50:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:47.696 16:50:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:47.696 16:50:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.696 16:50:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.696 ************************************ 00:04:47.696 START TEST event_reactor 00:04:47.696 ************************************ 00:04:47.696 16:50:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:47.696 [2024-11-20 16:50:06.016184] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:47.696 [2024-11-20 16:50:06.016281] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:04:47.696 [2024-11-20 16:50:06.161568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.696 [2024-11-20 16:50:06.211527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.080 test_start 00:04:49.080 oneshot 00:04:49.080 tick 100 00:04:49.080 tick 100 00:04:49.080 tick 250 00:04:49.080 tick 100 00:04:49.080 tick 100 00:04:49.080 tick 100 00:04:49.080 tick 250 00:04:49.080 tick 500 00:04:49.080 tick 100 00:04:49.080 tick 100 00:04:49.080 tick 250 00:04:49.080 tick 100 00:04:49.080 tick 100 00:04:49.080 test_end 00:04:49.080 00:04:49.080 real 0m1.260s 00:04:49.080 user 0m1.113s 00:04:49.080 sys 0m0.042s 00:04:49.080 16:50:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.080 ************************************ 00:04:49.080 END TEST event_reactor 00:04:49.080 16:50:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:49.080 ************************************ 00:04:49.080 16:50:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.080 16:50:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:49.080 16:50:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.080 16:50:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.080 ************************************ 00:04:49.080 START TEST event_reactor_perf 00:04:49.080 ************************************ 00:04:49.080 16:50:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.080 [2024-11-20 16:50:07.331583] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:04:49.080 [2024-11-20 16:50:07.331857] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ] 00:04:49.080 [2024-11-20 16:50:07.478317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.080 [2024-11-20 16:50:07.528362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.459 test_start 00:04:50.459 test_end 00:04:50.459 Performance: 383959 events per second 00:04:50.459 00:04:50.459 real 0m1.266s 00:04:50.459 user 0m1.115s 00:04:50.459 sys 0m0.045s 00:04:50.459 16:50:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.459 ************************************ 00:04:50.459 END TEST event_reactor_perf 00:04:50.459 ************************************ 00:04:50.459 16:50:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.459 16:50:08 event -- event/event.sh@49 -- # uname -s 00:04:50.459 16:50:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:50.459 16:50:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:50.459 16:50:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.459 16:50:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.459 16:50:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.459 ************************************ 00:04:50.459 START TEST event_scheduler 00:04:50.459 ************************************ 00:04:50.459 16:50:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:50.459 * Looking for test storage... 
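The event_scheduler section that follows drives the scheduler test app purely over JSON-RPC; rpc_cmd in its trace is again the rpc.py wrapper, and scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete come from the test's scheduler_plugin. A condensed sketch of the call sequence it issues, with thread names, masks and priorities copied from the trace; the direct rpc.py invocation and the plugin being importable from the test directory are assumptions:

# Sketch of the RPC sequence issued by scheduler.sh below (values from the trace).
RPC() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

RPC framework_set_scheduler dynamic       # switch to the dynamic scheduler
RPC framework_start_init                  # finish init (app started with --wait-for-rpc)

RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100    # busy thread pinned to core 0
RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0      # idle thread pinned to core 0
thread_id=$(RPC scheduler_thread_create -n half_active -a 0)  # unpinned; id 11 in this run
RPC scheduler_thread_set_active "$thread_id" 50               # make it 50% busy
thread_id=$(RPC scheduler_thread_create -n deleted -a 100)    # id 12 in this run
RPC scheduler_thread_delete "$thread_id"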
00:04:50.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:50.459 16:50:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.459 16:50:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.459 16:50:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.460 16:50:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.460 --rc genhtml_branch_coverage=1 00:04:50.460 --rc genhtml_function_coverage=1 00:04:50.460 --rc genhtml_legend=1 00:04:50.460 --rc geninfo_all_blocks=1 00:04:50.460 --rc geninfo_unexecuted_blocks=1 00:04:50.460 00:04:50.460 ' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.460 --rc genhtml_branch_coverage=1 00:04:50.460 --rc genhtml_function_coverage=1 00:04:50.460 --rc genhtml_legend=1 00:04:50.460 --rc geninfo_all_blocks=1 00:04:50.460 --rc geninfo_unexecuted_blocks=1 00:04:50.460 00:04:50.460 ' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.460 --rc genhtml_branch_coverage=1 00:04:50.460 --rc genhtml_function_coverage=1 00:04:50.460 --rc genhtml_legend=1 00:04:50.460 --rc geninfo_all_blocks=1 00:04:50.460 --rc geninfo_unexecuted_blocks=1 00:04:50.460 00:04:50.460 ' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.460 --rc genhtml_branch_coverage=1 00:04:50.460 --rc genhtml_function_coverage=1 00:04:50.460 --rc genhtml_legend=1 00:04:50.460 --rc geninfo_all_blocks=1 00:04:50.460 --rc geninfo_unexecuted_blocks=1 00:04:50.460 00:04:50.460 ' 00:04:50.460 16:50:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:50.460 16:50:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:50.460 16:50:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58116 00:04:50.460 16:50:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.460 16:50:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58116 00:04:50.460 16:50:08 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58116 ']' 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.460 16:50:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.460 [2024-11-20 16:50:08.879100] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:50.460 [2024-11-20 16:50:08.879366] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58116 ] 00:04:50.719 [2024-11-20 16:50:09.028556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.719 [2024-11-20 16:50:09.101561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.719 [2024-11-20 16:50:09.101713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.719 [2024-11-20 16:50:09.101796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.719 [2024-11-20 16:50:09.101801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:51.657 16:50:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:51.657 POWER: Cannot set governor of lcore 0 to userspace 00:04:51.657 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:51.657 POWER: Cannot set governor of lcore 0 to performance 00:04:51.657 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:51.657 POWER: Cannot set governor of lcore 0 to userspace 00:04:51.657 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:51.657 POWER: Cannot set governor of lcore 0 to userspace 00:04:51.657 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:51.657 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:51.657 POWER: Unable to set Power Management Environment for lcore 0 00:04:51.657 [2024-11-20 16:50:09.896350] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:51.657 [2024-11-20 16:50:09.896364] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:51.657 [2024-11-20 16:50:09.896399] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:51.657 [2024-11-20 16:50:09.896416] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:51.657 [2024-11-20 16:50:09.896424] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:51.657 [2024-11-20 16:50:09.896431] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 [2024-11-20 16:50:09.957462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:51.657 [2024-11-20 16:50:09.995741] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.657 16:50:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 ************************************ 00:04:51.657 START TEST scheduler_create_thread 00:04:51.657 ************************************ 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 2 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 3 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 4 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 5 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:51.657 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 6 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 7 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 8 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 9 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 10 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.658 16:50:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.035 16:50:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.035 16:50:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:53.035 16:50:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:53.035 16:50:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.035 16:50:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.413 ************************************ 00:04:54.413 END TEST scheduler_create_thread 00:04:54.413 ************************************ 00:04:54.413 16:50:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.413 00:04:54.413 real 0m2.611s 00:04:54.413 user 0m0.012s 00:04:54.413 sys 0m0.011s 00:04:54.413 16:50:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.413 16:50:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.413 16:50:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:54.413 16:50:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58116 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58116 ']' 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58116 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58116 00:04:54.413 killing process with pid 58116 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58116' 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58116 00:04:54.413 16:50:12 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58116 00:04:54.671 [2024-11-20 16:50:13.095755] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:54.930 00:04:54.931 real 0m4.677s 00:04:54.931 user 0m8.930s 00:04:54.931 sys 0m0.378s 00:04:54.931 16:50:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.931 16:50:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.931 ************************************ 00:04:54.931 END TEST event_scheduler 00:04:54.931 ************************************ 00:04:54.931 16:50:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.931 16:50:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.931 16:50:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.931 16:50:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.931 16:50:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.931 ************************************ 00:04:54.931 START TEST app_repeat 00:04:54.931 ************************************ 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:54.931 Process app_repeat pid: 58221 00:04:54.931 spdk_app_start Round 0 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58221 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58221' 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.931 16:50:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58221 /var/tmp/spdk-nbd.sock 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58221 ']' 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
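
The scheduler test that just finished drives everything through scripts/rpc.py: it switches the framework to the dynamic scheduler, completes subsystem init, and then uses the test-only scheduler_plugin to create, retune, and delete SPDK threads with given CPU masks and activity levels. A minimal sketch of that sequence, assuming a target already listening on /var/tmp/spdk.sock and that the scheduler_plugin module from the test tree is importable by rpc.py (plugin location assumed):

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc framework_set_scheduler dynamic     # governor/POWER warnings are non-fatal inside a VM
  $rpc framework_start_init

  # create a fully busy thread pinned to core 0 (mask 0x1, activity 100%)
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # drop it to 50% busy
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"
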
00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.931 16:50:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.931 [2024-11-20 16:50:13.398543] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:04:54.931 [2024-11-20 16:50:13.398784] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58221 ] 00:04:55.189 [2024-11-20 16:50:13.547640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.189 [2024-11-20 16:50:13.605224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.189 [2024-11-20 16:50:13.605235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.189 [2024-11-20 16:50:13.658915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.189 16:50:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.189 16:50:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.189 16:50:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.756 Malloc0 00:04:55.756 16:50:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.015 Malloc1 00:04:56.015 16:50:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.015 16:50:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.274 /dev/nbd0 00:04:56.274 16:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.274 16:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.274 1+0 records in 00:04:56.274 1+0 records out 00:04:56.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245973 s, 16.7 MB/s 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.274 16:50:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.274 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.274 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.274 16:50:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.532 /dev/nbd1 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.532 1+0 records in 00:04:56.532 1+0 records out 00:04:56.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216347 s, 18.9 MB/s 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.532 16:50:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.532 16:50:14 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.532 16:50:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.791 { 00:04:56.791 "nbd_device": "/dev/nbd0", 00:04:56.791 "bdev_name": "Malloc0" 00:04:56.791 }, 00:04:56.791 { 00:04:56.791 "nbd_device": "/dev/nbd1", 00:04:56.791 "bdev_name": "Malloc1" 00:04:56.791 } 00:04:56.791 ]' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.791 { 00:04:56.791 "nbd_device": "/dev/nbd0", 00:04:56.791 "bdev_name": "Malloc0" 00:04:56.791 }, 00:04:56.791 { 00:04:56.791 "nbd_device": "/dev/nbd1", 00:04:56.791 "bdev_name": "Malloc1" 00:04:56.791 } 00:04:56.791 ]' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.791 /dev/nbd1' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.791 /dev/nbd1' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.791 256+0 records in 00:04:56.791 256+0 records out 00:04:56.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840367 s, 125 MB/s 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.791 256+0 records in 00:04:56.791 256+0 records out 00:04:56.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206479 s, 50.8 MB/s 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.791 256+0 records in 00:04:56.791 
256+0 records out 00:04:56.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238084 s, 44.0 MB/s 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.791 16:50:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.792 16:50:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.050 16:50:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.308 16:50:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.567 16:50:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.568 16:50:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.568 16:50:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.568 16:50:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.826 16:50:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.826 16:50:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.085 16:50:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.344 [2024-11-20 16:50:16.697767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.345 [2024-11-20 16:50:16.759391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.345 [2024-11-20 16:50:16.759405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.345 [2024-11-20 16:50:16.817659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.345 [2024-11-20 16:50:16.817737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.345 [2024-11-20 16:50:16.817751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.678 spdk_app_start Round 1 00:05:01.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.678 16:50:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.678 16:50:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:01.678 16:50:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58221 /var/tmp/spdk-nbd.sock 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58221 ']' 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
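
The Round 0 pass that just completed is the core of app_repeat: two 64 MB malloc bdevs with a 4096-byte block size are created over RPC, exported as /dev/nbd0 and /dev/nbd1, loaded with 1 MiB of random data, read back and compared, and then unexported. Rounds 1 and 2 below repeat the identical cycle. Condensed into one pass (temporary file paths shortened for the sketch):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $rpc bdev_malloc_create 64 4096          # -> Malloc0
  $rpc bdev_malloc_create 64 4096          # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256                # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct     # write through the export
      cmp -b -n 1M /tmp/nbdrandtest "$dev"                                # verify what the bdev stored
  done
  rm /tmp/nbdrandtest

  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
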
00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.678 16:50:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.678 16:50:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.678 Malloc0 00:05:01.678 16:50:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.937 Malloc1 00:05:01.937 16:50:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.937 16:50:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.202 /dev/nbd0 00:05:02.202 16:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.202 16:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.202 1+0 records in 00:05:02.202 1+0 records out 
00:05:02.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204155 s, 20.1 MB/s 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.202 16:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.202 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.202 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.202 16:50:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.461 /dev/nbd1 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.461 1+0 records in 00:05:02.461 1+0 records out 00:05:02.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232638 s, 17.6 MB/s 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.461 16:50:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.461 16:50:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.719 16:50:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.719 { 00:05:02.719 "nbd_device": "/dev/nbd0", 00:05:02.719 "bdev_name": "Malloc0" 00:05:02.719 }, 00:05:02.719 { 00:05:02.719 "nbd_device": "/dev/nbd1", 00:05:02.719 "bdev_name": "Malloc1" 00:05:02.719 } 
00:05:02.720 ]' 00:05:02.720 16:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.720 { 00:05:02.720 "nbd_device": "/dev/nbd0", 00:05:02.720 "bdev_name": "Malloc0" 00:05:02.720 }, 00:05:02.720 { 00:05:02.720 "nbd_device": "/dev/nbd1", 00:05:02.720 "bdev_name": "Malloc1" 00:05:02.720 } 00:05:02.720 ]' 00:05:02.720 16:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.978 /dev/nbd1' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.978 /dev/nbd1' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.978 256+0 records in 00:05:02.978 256+0 records out 00:05:02.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107437 s, 97.6 MB/s 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.978 256+0 records in 00:05:02.978 256+0 records out 00:05:02.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222735 s, 47.1 MB/s 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.978 256+0 records in 00:05:02.978 256+0 records out 00:05:02.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247894 s, 42.3 MB/s 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.978 16:50:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.978 16:50:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.235 16:50:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.493 16:50:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.772 16:50:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.031 16:50:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.031 16:50:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.289 16:50:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.548 [2024-11-20 16:50:22.841285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.548 [2024-11-20 16:50:22.881913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.548 [2024-11-20 16:50:22.881918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.548 [2024-11-20 16:50:22.935412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.548 [2024-11-20 16:50:22.935516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.548 [2024-11-20 16:50:22.935537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.837 spdk_app_start Round 2 00:05:07.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.837 16:50:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.837 16:50:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:07.837 16:50:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58221 /var/tmp/spdk-nbd.sock 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58221 ']' 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
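
Inside each round, the waitfornbd / waitfornbd_exit helpers visible in the trace decide when an exported device is usable (or gone) by polling /proc/partitions, then sanity-check it with a single direct-I/O read. A rough equivalent of the readiness side, with the 20-try bound taken from the trace and the polling delay and scratch-file path assumed:

  wait_for_nbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device is visible to the kernel
          sleep 0.1                                          # delay assumed; only the retry count is from the trace
      done
      # read one 4 KiB block back to prove the export answers I/O
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
      rm -f /tmp/nbdtest
  }

  wait_for_nbd nbd0
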
00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.837 16:50:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.837 16:50:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.837 Malloc0 00:05:07.837 16:50:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.097 Malloc1 00:05:08.097 16:50:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.097 16:50:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.356 /dev/nbd0 00:05:08.356 16:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.356 16:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.356 1+0 records in 00:05:08.356 1+0 records out 
00:05:08.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221607 s, 18.5 MB/s 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.356 16:50:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.357 16:50:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.357 16:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.357 16:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.357 16:50:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.671 /dev/nbd1 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.931 1+0 records in 00:05:08.931 1+0 records out 00:05:08.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271943 s, 15.1 MB/s 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.931 16:50:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.931 16:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.191 { 00:05:09.191 "nbd_device": "/dev/nbd0", 00:05:09.191 "bdev_name": "Malloc0" 00:05:09.191 }, 00:05:09.191 { 00:05:09.191 "nbd_device": "/dev/nbd1", 00:05:09.191 "bdev_name": "Malloc1" 00:05:09.191 } 
00:05:09.191 ]' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.191 { 00:05:09.191 "nbd_device": "/dev/nbd0", 00:05:09.191 "bdev_name": "Malloc0" 00:05:09.191 }, 00:05:09.191 { 00:05:09.191 "nbd_device": "/dev/nbd1", 00:05:09.191 "bdev_name": "Malloc1" 00:05:09.191 } 00:05:09.191 ]' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.191 /dev/nbd1' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.191 /dev/nbd1' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.191 256+0 records in 00:05:09.191 256+0 records out 00:05:09.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852837 s, 123 MB/s 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.191 256+0 records in 00:05:09.191 256+0 records out 00:05:09.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235577 s, 44.5 MB/s 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.191 256+0 records in 00:05:09.191 256+0 records out 00:05:09.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246519 s, 42.5 MB/s 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.191 16:50:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.191 16:50:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.192 16:50:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.192 16:50:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.450 16:50:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.016 16:50:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.017 16:50:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.276 16:50:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.276 16:50:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.534 16:50:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.793 [2024-11-20 16:50:29.086750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.793 [2024-11-20 16:50:29.147115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.793 [2024-11-20 16:50:29.147130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.793 [2024-11-20 16:50:29.202432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.793 [2024-11-20 16:50:29.202538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.793 [2024-11-20 16:50:29.202567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.095 16:50:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58221 /var/tmp/spdk-nbd.sock 00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58221 ']' 00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
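Condensed from the nbd_common.sh trace above, the write/verify pass boils down to the following shell sequence (paths shortened and the poll interval assumed; a sketch of what the trace shows, not the helper itself):

# write a 1 MiB random pattern and push it through each NBD device with O_DIRECT
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$dev bs=4096 count=256 oflag=direct
done
# read the devices back and compare byte-for-byte against the pattern file
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest $dev
done
rm nbdrandtest
# detach each device over the RPC socket and wait until it leaves /proc/partitions
for dev in /dev/nbd0 /dev/nbd1; do
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk $dev
    while grep -q -w "$(basename $dev)" /proc/partitions; do sleep 0.1; done
done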
00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.095 16:50:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.095 16:50:32 event.app_repeat -- event/event.sh@39 -- # killprocess 58221 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58221 ']' 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58221 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58221 00:05:14.095 killing process with pid 58221 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58221' 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58221 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58221 00:05:14.095 spdk_app_start is called in Round 0. 00:05:14.095 Shutdown signal received, stop current app iteration 00:05:14.095 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:14.095 spdk_app_start is called in Round 1. 00:05:14.095 Shutdown signal received, stop current app iteration 00:05:14.095 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:14.095 spdk_app_start is called in Round 2. 00:05:14.095 Shutdown signal received, stop current app iteration 00:05:14.095 Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 reinitialization... 00:05:14.095 spdk_app_start is called in Round 3. 00:05:14.095 Shutdown signal received, stop current app iteration 00:05:14.095 16:50:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.095 16:50:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:14.095 00:05:14.095 real 0m19.102s 00:05:14.095 user 0m43.698s 00:05:14.095 sys 0m2.825s 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.095 16:50:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.095 ************************************ 00:05:14.095 END TEST app_repeat 00:05:14.095 ************************************ 00:05:14.095 16:50:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.095 16:50:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.095 16:50:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.095 16:50:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.095 16:50:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.095 ************************************ 00:05:14.095 START TEST cpu_locks 00:05:14.095 ************************************ 00:05:14.095 16:50:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.095 * Looking for test storage... 
00:05:14.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.095 16:50:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.095 16:50:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.095 16:50:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.354 16:50:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.354 16:50:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.355 --rc genhtml_branch_coverage=1 00:05:14.355 --rc genhtml_function_coverage=1 00:05:14.355 --rc genhtml_legend=1 00:05:14.355 --rc geninfo_all_blocks=1 00:05:14.355 --rc geninfo_unexecuted_blocks=1 00:05:14.355 00:05:14.355 ' 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.355 --rc genhtml_branch_coverage=1 00:05:14.355 --rc genhtml_function_coverage=1 
00:05:14.355 --rc genhtml_legend=1 00:05:14.355 --rc geninfo_all_blocks=1 00:05:14.355 --rc geninfo_unexecuted_blocks=1 00:05:14.355 00:05:14.355 ' 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.355 --rc genhtml_branch_coverage=1 00:05:14.355 --rc genhtml_function_coverage=1 00:05:14.355 --rc genhtml_legend=1 00:05:14.355 --rc geninfo_all_blocks=1 00:05:14.355 --rc geninfo_unexecuted_blocks=1 00:05:14.355 00:05:14.355 ' 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.355 --rc genhtml_branch_coverage=1 00:05:14.355 --rc genhtml_function_coverage=1 00:05:14.355 --rc genhtml_legend=1 00:05:14.355 --rc geninfo_all_blocks=1 00:05:14.355 --rc geninfo_unexecuted_blocks=1 00:05:14.355 00:05:14.355 ' 00:05:14.355 16:50:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:14.355 16:50:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:14.355 16:50:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:14.355 16:50:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.355 16:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.355 ************************************ 00:05:14.355 START TEST default_locks 00:05:14.355 ************************************ 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58661 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58661 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58661 ']' 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.355 16:50:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.355 [2024-11-20 16:50:32.783272] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:14.355 [2024-11-20 16:50:32.783395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58661 ] 00:05:14.614 [2024-11-20 16:50:32.922204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.614 [2024-11-20 16:50:32.979263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.614 [2024-11-20 16:50:33.051453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.872 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.872 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:14.872 16:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58661 00:05:14.872 16:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58661 00:05:14.872 16:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58661 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58661 ']' 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58661 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58661 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.440 killing process with pid 58661 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58661' 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58661 00:05:15.440 16:50:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58661 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58661 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58661 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58661 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58661 ']' 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.699 
16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58661) - No such process 00:05:15.699 ERROR: process (pid: 58661) is no longer running 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.699 00:05:15.699 real 0m1.408s 00:05:15.699 user 0m1.376s 00:05:15.699 sys 0m0.548s 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.699 16:50:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 ************************************ 00:05:15.699 END TEST default_locks 00:05:15.699 ************************************ 00:05:15.699 16:50:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.699 16:50:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.699 16:50:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.699 16:50:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.699 ************************************ 00:05:15.699 START TEST default_locks_via_rpc 00:05:15.699 ************************************ 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58705 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58705 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58705 ']' 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.699 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.958 [2024-11-20 16:50:34.249753] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:15.958 [2024-11-20 16:50:34.249893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58705 ] 00:05:15.958 [2024-11-20 16:50:34.399644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.958 [2024-11-20 16:50:34.456111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.217 [2024-11-20 16:50:34.528889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58705 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.217 16:50:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58705 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58705 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58705 ']' 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58705 00:05:16.785 16:50:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58705 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.785 killing process with pid 58705 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58705' 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58705 00:05:16.785 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58705 00:05:17.354 00:05:17.354 real 0m1.463s 00:05:17.354 user 0m1.441s 00:05:17.354 sys 0m0.567s 00:05:17.354 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.354 16:50:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 ************************************ 00:05:17.354 END TEST default_locks_via_rpc 00:05:17.354 ************************************ 00:05:17.354 16:50:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.354 16:50:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.354 16:50:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.354 16:50:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 ************************************ 00:05:17.354 START TEST non_locking_app_on_locked_coremask 00:05:17.354 ************************************ 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58749 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58749 /var/tmp/spdk.sock 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58749 ']' 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
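Both default_locks cases above reduce to the same probe: does the running spdk_tgt hold a lock on a /var/tmp/spdk_cpu_lock_* file for each core in its mask? In terms of the commands visible in the trace (pid handling simplified; a sketch, not the test script):

# start a single-core target and check that it holds its core-0 lock file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held"

# the via_rpc variant drops and re-takes the same locks at runtime
scripts/rpc.py framework_disable_cpumask_locks   # no spdk_cpu_lock_* files remain
scripts/rpc.py framework_enable_cpumask_locks    # lock re-acquired, lslocks shows it again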
00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.354 16:50:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 [2024-11-20 16:50:35.762190] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:17.354 [2024-11-20 16:50:35.762319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ] 00:05:17.640 [2024-11-20 16:50:35.911690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.640 [2024-11-20 16:50:35.972886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.640 [2024-11-20 16:50:36.046766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58765 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58765 /var/tmp/spdk2.sock 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58765 ']' 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.591 16:50:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.591 [2024-11-20 16:50:36.856730] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:18.592 [2024-11-20 16:50:36.856891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58765 ] 00:05:18.592 [2024-11-20 16:50:37.021544] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.592 [2024-11-20 16:50:37.021593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.850 [2024-11-20 16:50:37.149632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.850 [2024-11-20 16:50:37.305409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.418 16:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.418 16:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.418 16:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58749 00:05:19.418 16:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58749 00:05:19.418 16:50:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.354 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58749 00:05:20.354 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58749 ']' 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58749 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58749 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.355 killing process with pid 58749 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58749' 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58749 00:05:20.355 16:50:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58749 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58765 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58765 ']' 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58765 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58765 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.290 killing process with pid 58765 00:05:21.290 16:50:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58765' 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58765 00:05:21.290 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58765 00:05:21.550 00:05:21.550 real 0m4.286s 00:05:21.550 user 0m4.844s 00:05:21.550 sys 0m1.165s 00:05:21.550 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.550 16:50:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.550 ************************************ 00:05:21.550 END TEST non_locking_app_on_locked_coremask 00:05:21.550 ************************************ 00:05:21.550 16:50:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:21.550 16:50:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.550 16:50:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.550 16:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.550 ************************************ 00:05:21.550 START TEST locking_app_on_unlocked_coremask 00:05:21.550 ************************************ 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58832 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58832 /var/tmp/spdk.sock 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58832 ']' 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.550 16:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.810 [2024-11-20 16:50:40.093095] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:21.810 [2024-11-20 16:50:40.093228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58832 ] 00:05:21.810 [2024-11-20 16:50:40.238498] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.810 [2024-11-20 16:50:40.238551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.810 [2024-11-20 16:50:40.293833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.069 [2024-11-20 16:50:40.363011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58848 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58848 /var/tmp/spdk2.sock 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58848 ']' 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.707 16:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.707 [2024-11-20 16:50:41.161295] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:22.707 [2024-11-20 16:50:41.161380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58848 ] 00:05:22.966 [2024-11-20 16:50:41.321048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.966 [2024-11-20 16:50:41.444335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.225 [2024-11-20 16:50:41.601814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.794 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.794 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.794 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58848 00:05:23.794 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58848 00:05:23.794 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58832 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58832 ']' 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58832 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.732 16:50:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58832 00:05:24.732 killing process with pid 58832 00:05:24.732 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.732 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.732 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58832' 00:05:24.732 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58832 00:05:24.732 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58832 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58848 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58848 ']' 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58848 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58848 00:05:25.301 killing process with pid 58848 00:05:25.301 16:50:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58848' 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58848 00:05:25.301 16:50:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58848 00:05:25.869 ************************************ 00:05:25.869 END TEST locking_app_on_unlocked_coremask 00:05:25.869 ************************************ 00:05:25.869 00:05:25.869 real 0m4.098s 00:05:25.869 user 0m4.592s 00:05:25.869 sys 0m1.152s 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.869 16:50:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:25.869 16:50:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.869 16:50:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.869 16:50:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.869 ************************************ 00:05:25.869 START TEST locking_app_on_locked_coremask 00:05:25.869 ************************************ 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58915 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58915 /var/tmp/spdk.sock 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58915 ']' 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.869 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.869 [2024-11-20 16:50:44.251297] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:25.869 [2024-11-20 16:50:44.251577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ] 00:05:25.869 [2024-11-20 16:50:44.397614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.128 [2024-11-20 16:50:44.442533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.128 [2024-11-20 16:50:44.513389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58923 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58923 /var/tmp/spdk2.sock 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58923 /var/tmp/spdk2.sock 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58923 /var/tmp/spdk2.sock 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58923 ']' 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.387 16:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.387 [2024-11-20 16:50:44.781486] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:26.387 [2024-11-20 16:50:44.781579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ] 00:05:26.646 [2024-11-20 16:50:44.941644] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58915 has claimed it. 00:05:26.646 [2024-11-20 16:50:44.941731] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.215 ERROR: process (pid: 58923) is no longer running 00:05:27.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58923) - No such process 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58915 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58915 00:05:27.215 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58915 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58915 ']' 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58915 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58915 00:05:27.474 killing process with pid 58915 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58915' 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58915 00:05:27.474 16:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58915 00:05:28.042 ************************************ 00:05:28.042 END TEST locking_app_on_locked_coremask 00:05:28.042 ************************************ 00:05:28.042 00:05:28.042 real 0m2.167s 00:05:28.042 user 0m2.443s 00:05:28.042 sys 0m0.596s 00:05:28.042 16:50:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.042 16:50:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.042 16:50:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.042 16:50:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.042 16:50:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.042 16:50:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.042 ************************************ 00:05:28.042 START TEST locking_overlapped_coremask 00:05:28.042 ************************************ 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58973 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58973 /var/tmp/spdk.sock 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58973 ']' 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.042 16:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.042 [2024-11-20 16:50:46.471200] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:28.042 [2024-11-20 16:50:46.471304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58973 ] 00:05:28.300 [2024-11-20 16:50:46.624005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.300 [2024-11-20 16:50:46.689996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.300 [2024-11-20 16:50:46.690105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.300 [2024-11-20 16:50:46.690112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.300 [2024-11-20 16:50:46.766300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58992 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58992 /var/tmp/spdk2.sock 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58992 /var/tmp/spdk2.sock 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.235 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58992 /var/tmp/spdk2.sock 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58992 ']' 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.236 16:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.236 [2024-11-20 16:50:47.565779] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:29.236 [2024-11-20 16:50:47.565905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:05:29.236 [2024-11-20 16:50:47.737318] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58973 has claimed it. 00:05:29.236 [2024-11-20 16:50:47.737395] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.803 ERROR: process (pid: 58992) is no longer running 00:05:29.803 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58992) - No such process 00:05:29.803 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.803 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:29.803 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58973 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58973 ']' 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58973 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58973 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.804 killing process with pid 58973 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58973' 00:05:29.804 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58973 00:05:29.804 16:50:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58973 00:05:30.372 00:05:30.372 real 0m2.315s 00:05:30.372 user 0m6.648s 00:05:30.372 sys 0m0.461s 00:05:30.372 ************************************ 00:05:30.372 END TEST locking_overlapped_coremask 00:05:30.372 ************************************ 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.372 16:50:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.372 16:50:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.372 16:50:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.372 16:50:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.372 ************************************ 00:05:30.372 START TEST locking_overlapped_coremask_via_rpc 00:05:30.372 ************************************ 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59032 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.372 16:50:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.372 [2024-11-20 16:50:48.841749] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:30.372 [2024-11-20 16:50:48.841896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:30.632 [2024-11-20 16:50:48.987888] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
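check_remaining_locks in the run above passes because spdk_tgt leaves one lock file per claimed core under /var/tmp; the --disable-cpumask-locks run that starts here skips that claiming until it is requested over RPC. A rough sketch of the file check, assuming only the /var/tmp/spdk_cpu_lock_NNN naming visible in the trace (the locking done inside spdk_tgt itself is not reproduced):

    # Compare the lock files on disk with what a -m 0x7 target should leave behind.
    shopt -s nullglob
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    if [[ "${locks[*]}" == "${expected[*]}" ]]; then
        echo "lock files match the claimed cores"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi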
00:05:30.632 [2024-11-20 16:50:48.987948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.632 [2024-11-20 16:50:49.052726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.632 [2024-11-20 16:50:49.052814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.632 [2024-11-20 16:50:49.052822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.632 [2024-11-20 16:50:49.126005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.566 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59050 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59050 /var/tmp/spdk2.sock 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59050 ']' 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.567 16:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.567 [2024-11-20 16:50:49.896513] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:31.567 [2024-11-20 16:50:49.896619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:05:31.567 [2024-11-20 16:50:50.061189] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:31.567 [2024-11-20 16:50:50.061236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.825 [2024-11-20 16:50:50.189229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.826 [2024-11-20 16:50:50.192964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.826 [2024-11-20 16:50:50.192964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.826 [2024-11-20 16:50:50.329872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.394 [2024-11-20 16:50:50.912993] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59032 has claimed it. 
00:05:32.394 request: 00:05:32.394 { 00:05:32.394 "method": "framework_enable_cpumask_locks", 00:05:32.394 "req_id": 1 00:05:32.394 } 00:05:32.394 Got JSON-RPC error response 00:05:32.394 response: 00:05:32.394 { 00:05:32.394 "code": -32603, 00:05:32.394 "message": "Failed to claim CPU core: 2" 00:05:32.394 } 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.394 16:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59050 /var/tmp/spdk2.sock 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59050 ']' 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
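The -32603 response above is the expected outcome of asking the second target to claim its cores while the first target already owns core 2. Both calls can be reproduced with the rpc.py script from the trace; the socket paths and method name are exactly the ones used above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Primary target (default socket /var/tmp/spdk.sock): claiming cores 0-2 succeeds.
    "$RPC" framework_enable_cpumask_locks

    # Secondary target on /var/tmp/spdk2.sock: core 2 is already locked, so this
    # returns the JSON-RPC error -32603 "Failed to claim CPU core: 2" shown above.
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks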
00:05:32.980 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.981 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.261 ************************************ 00:05:33.261 END TEST locking_overlapped_coremask_via_rpc 00:05:33.261 ************************************ 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.261 00:05:33.261 real 0m2.775s 00:05:33.261 user 0m1.488s 00:05:33.261 sys 0m0.207s 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.261 16:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.261 16:50:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:33.261 16:50:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59032 ]] 00:05:33.261 16:50:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59032 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59032 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59032 00:05:33.261 killing process with pid 59032 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59032' 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59032 00:05:33.261 16:50:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59032 00:05:33.520 16:50:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59050 ]] 00:05:33.520 16:50:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59050 00:05:33.520 16:50:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59050 ']' 00:05:33.520 16:50:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59050 00:05:33.520 16:50:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.520 16:50:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.520 
16:50:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59050 00:05:33.779 killing process with pid 59050 00:05:33.779 16:50:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:33.779 16:50:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:33.779 16:50:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59050' 00:05:33.779 16:50:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59050 00:05:33.779 16:50:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59050 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59032 ]] 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59032 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59032 00:05:34.038 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59032) - No such process 00:05:34.038 Process with pid 59032 is not found 00:05:34.038 Process with pid 59050 is not found 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59032 is not found' 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59050 ]] 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59050 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59050 ']' 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59050 00:05:34.038 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59050) - No such process 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59050 is not found' 00:05:34.038 16:50:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.038 00:05:34.038 real 0m19.930s 00:05:34.038 user 0m36.300s 00:05:34.038 sys 0m5.631s 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.038 16:50:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.038 ************************************ 00:05:34.038 END TEST cpu_locks 00:05:34.038 ************************************ 00:05:34.038 ************************************ 00:05:34.038 END TEST event 00:05:34.038 ************************************ 00:05:34.038 00:05:34.038 real 0m48.006s 00:05:34.038 user 1m35.442s 00:05:34.038 sys 0m9.263s 00:05:34.038 16:50:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.038 16:50:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.038 16:50:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.038 16:50:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.038 16:50:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.038 16:50:52 -- common/autotest_common.sh@10 -- # set +x 00:05:34.038 ************************************ 00:05:34.038 START TEST thread 00:05:34.038 ************************************ 00:05:34.038 16:50:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.297 * Looking for test storage... 
00:05:34.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:34.297 16:50:52 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.297 16:50:52 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.297 16:50:52 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.297 16:50:52 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.297 16:50:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.297 16:50:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.297 16:50:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.297 16:50:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.297 16:50:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.297 16:50:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.297 16:50:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.297 16:50:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.297 16:50:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.297 16:50:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.297 16:50:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.297 16:50:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:34.298 16:50:52 thread -- scripts/common.sh@345 -- # : 1 00:05:34.298 16:50:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.298 16:50:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.298 16:50:52 thread -- scripts/common.sh@365 -- # decimal 1 00:05:34.298 16:50:52 thread -- scripts/common.sh@353 -- # local d=1 00:05:34.298 16:50:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.298 16:50:52 thread -- scripts/common.sh@355 -- # echo 1 00:05:34.298 16:50:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.298 16:50:52 thread -- scripts/common.sh@366 -- # decimal 2 00:05:34.298 16:50:52 thread -- scripts/common.sh@353 -- # local d=2 00:05:34.298 16:50:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.298 16:50:52 thread -- scripts/common.sh@355 -- # echo 2 00:05:34.298 16:50:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.298 16:50:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.298 16:50:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.298 16:50:52 thread -- scripts/common.sh@368 -- # return 0 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.298 --rc genhtml_branch_coverage=1 00:05:34.298 --rc genhtml_function_coverage=1 00:05:34.298 --rc genhtml_legend=1 00:05:34.298 --rc geninfo_all_blocks=1 00:05:34.298 --rc geninfo_unexecuted_blocks=1 00:05:34.298 00:05:34.298 ' 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.298 --rc genhtml_branch_coverage=1 00:05:34.298 --rc genhtml_function_coverage=1 00:05:34.298 --rc genhtml_legend=1 00:05:34.298 --rc geninfo_all_blocks=1 00:05:34.298 --rc geninfo_unexecuted_blocks=1 00:05:34.298 00:05:34.298 ' 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.298 --rc genhtml_branch_coverage=1 00:05:34.298 --rc genhtml_function_coverage=1 00:05:34.298 --rc genhtml_legend=1 00:05:34.298 --rc geninfo_all_blocks=1 00:05:34.298 --rc geninfo_unexecuted_blocks=1 00:05:34.298 00:05:34.298 ' 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.298 --rc genhtml_branch_coverage=1 00:05:34.298 --rc genhtml_function_coverage=1 00:05:34.298 --rc genhtml_legend=1 00:05:34.298 --rc geninfo_all_blocks=1 00:05:34.298 --rc geninfo_unexecuted_blocks=1 00:05:34.298 00:05:34.298 ' 00:05:34.298 16:50:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.298 16:50:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.298 ************************************ 00:05:34.298 START TEST thread_poller_perf 00:05:34.298 ************************************ 00:05:34.298 16:50:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.298 [2024-11-20 16:50:52.782461] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:34.298 [2024-11-20 16:50:52.783269] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:05:34.557 [2024-11-20 16:50:52.927347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.557 [2024-11-20 16:50:52.979784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.557 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:35.935 [2024-11-20T16:50:54.471Z] ====================================== 00:05:35.935 [2024-11-20T16:50:54.471Z] busy:2211607034 (cyc) 00:05:35.935 [2024-11-20T16:50:54.471Z] total_run_count: 348000 00:05:35.935 [2024-11-20T16:50:54.471Z] tsc_hz: 2200000000 (cyc) 00:05:35.935 [2024-11-20T16:50:54.471Z] ====================================== 00:05:35.935 [2024-11-20T16:50:54.471Z] poller_cost: 6355 (cyc), 2888 (nsec) 00:05:35.935 00:05:35.935 real 0m1.270s 00:05:35.935 user 0m1.121s 00:05:35.935 sys 0m0.040s 00:05:35.935 16:50:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.935 ************************************ 00:05:35.935 END TEST thread_poller_perf 00:05:35.935 ************************************ 00:05:35.935 16:50:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 16:50:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.935 16:50:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:35.935 16:50:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.935 16:50:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 ************************************ 00:05:35.935 START TEST thread_poller_perf 00:05:35.935 ************************************ 00:05:35.935 16:50:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.935 [2024-11-20 16:50:54.108476] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:35.935 [2024-11-20 16:50:54.108582] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:05:35.935 [2024-11-20 16:50:54.254903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.935 [2024-11-20 16:50:54.298448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.935 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:36.872 [2024-11-20T16:50:55.408Z] ====================================== 00:05:36.872 [2024-11-20T16:50:55.408Z] busy:2201847230 (cyc) 00:05:36.872 [2024-11-20T16:50:55.408Z] total_run_count: 4598000 00:05:36.872 [2024-11-20T16:50:55.408Z] tsc_hz: 2200000000 (cyc) 00:05:36.872 [2024-11-20T16:50:55.408Z] ====================================== 00:05:36.872 [2024-11-20T16:50:55.408Z] poller_cost: 478 (cyc), 217 (nsec) 00:05:36.872 ************************************ 00:05:36.872 END TEST thread_poller_perf 00:05:36.872 ************************************ 00:05:36.872 00:05:36.872 real 0m1.255s 00:05:36.872 user 0m1.108s 00:05:36.872 sys 0m0.041s 00:05:36.872 16:50:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.872 16:50:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.872 16:50:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.872 ************************************ 00:05:36.872 END TEST thread 00:05:36.872 ************************************ 00:05:36.872 00:05:36.872 real 0m2.847s 00:05:36.872 user 0m2.393s 00:05:36.872 sys 0m0.234s 00:05:36.872 16:50:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.872 16:50:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.131 16:50:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:37.131 16:50:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.131 16:50:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.131 16:50:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.131 16:50:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.131 ************************************ 00:05:37.131 START TEST app_cmdline 00:05:37.131 ************************************ 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.131 * Looking for test storage... 
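poller_cost in both runs above is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. Re-deriving the 1-microsecond-period run (the derivation is an assumption, but it matches the printed figures):

    # Re-derive poller_cost for the first run; the 0 us run works the same way.
    busy=2211607034        # busy cycles reported above
    runs=348000            # total_run_count
    tsc_hz=2200000000      # cycles per second
    cyc=$(( busy / runs ))                   # 6355 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2888 ns per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic on the second run gives 2201847230 / 4598000 ≈ 478 cycles, about 217 ns per poll, so the measured per-poll cost drops by more than an order of magnitude when the period is 0.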
00:05:37.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:37.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.131 16:50:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.131 --rc genhtml_branch_coverage=1 00:05:37.131 --rc genhtml_function_coverage=1 00:05:37.131 --rc genhtml_legend=1 00:05:37.131 --rc geninfo_all_blocks=1 00:05:37.131 --rc geninfo_unexecuted_blocks=1 00:05:37.131 00:05:37.131 ' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.131 --rc genhtml_branch_coverage=1 00:05:37.131 --rc genhtml_function_coverage=1 00:05:37.131 --rc genhtml_legend=1 00:05:37.131 --rc geninfo_all_blocks=1 00:05:37.131 --rc geninfo_unexecuted_blocks=1 00:05:37.131 00:05:37.131 ' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.131 --rc genhtml_branch_coverage=1 00:05:37.131 --rc genhtml_function_coverage=1 00:05:37.131 --rc genhtml_legend=1 00:05:37.131 --rc geninfo_all_blocks=1 00:05:37.131 --rc geninfo_unexecuted_blocks=1 00:05:37.131 00:05:37.131 ' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.131 --rc genhtml_branch_coverage=1 00:05:37.131 --rc genhtml_function_coverage=1 00:05:37.131 --rc genhtml_legend=1 00:05:37.131 --rc geninfo_all_blocks=1 00:05:37.131 --rc geninfo_unexecuted_blocks=1 00:05:37.131 00:05:37.131 ' 00:05:37.131 16:50:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:37.131 16:50:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59299 00:05:37.131 16:50:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59299 00:05:37.131 16:50:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.131 16:50:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.390 [2024-11-20 16:50:55.695004] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:37.390 [2024-11-20 16:50:55.695293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59299 ] 00:05:37.390 [2024-11-20 16:50:55.842740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.390 [2024-11-20 16:50:55.888421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.648 [2024-11-20 16:50:55.957738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.648 16:50:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.648 16:50:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:37.648 16:50:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:37.907 { 00:05:37.907 "version": "SPDK v25.01-pre git sha1 2e015e34f", 00:05:37.907 "fields": { 00:05:37.907 "major": 25, 00:05:37.907 "minor": 1, 00:05:37.907 "patch": 0, 00:05:37.907 "suffix": "-pre", 00:05:37.907 "commit": "2e015e34f" 00:05:37.907 } 00:05:37.907 } 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:37.907 16:50:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:37.907 16:50:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.907 16:50:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.907 16:50:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.166 16:50:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:38.166 16:50:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:38.166 16:50:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:38.166 16:50:56 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.442 request: 00:05:38.442 { 00:05:38.442 "method": "env_dpdk_get_mem_stats", 00:05:38.442 "req_id": 1 00:05:38.442 } 00:05:38.442 Got JSON-RPC error response 00:05:38.442 response: 00:05:38.442 { 00:05:38.442 "code": -32601, 00:05:38.442 "message": "Method not found" 00:05:38.442 } 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.442 16:50:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59299 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59299 ']' 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59299 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59299 00:05:38.442 killing process with pid 59299 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59299' 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@973 -- # kill 59299 00:05:38.442 16:50:56 app_cmdline -- common/autotest_common.sh@978 -- # wait 59299 00:05:38.701 00:05:38.701 real 0m1.717s 00:05:38.701 user 0m2.017s 00:05:38.701 sys 0m0.475s 00:05:38.701 16:50:57 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.701 ************************************ 00:05:38.701 END TEST app_cmdline 00:05:38.701 ************************************ 00:05:38.701 16:50:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.701 16:50:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:38.701 16:50:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.701 16:50:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.701 16:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.701 ************************************ 00:05:38.701 START TEST version 00:05:38.701 ************************************ 00:05:38.701 16:50:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:38.960 * Looking for test storage... 
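Both RPC outcomes above follow from the allow-list the target was started with (--rpcs-allowed spdk_get_version,rpc_get_methods): the permitted call returns the version object, anything else gets -32601. A minimal reproduction using only the script and method names from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # On the allow-list: prints the {"version": "SPDK v25.01-pre ...", "fields": ...} object above.
    "$RPC" spdk_get_version

    # Also allowed: returns exactly the two permitted method names.
    "$RPC" rpc_get_methods

    # Not on the allow-list: fails with JSON-RPC error -32601 "Method not found".
    "$RPC" env_dpdk_get_mem_stats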
00:05:38.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:38.960 16:50:57 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.960 16:50:57 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.960 16:50:57 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.960 16:50:57 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.960 16:50:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.960 16:50:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.960 16:50:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.960 16:50:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.960 16:50:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.960 16:50:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.960 16:50:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.960 16:50:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.960 16:50:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.960 16:50:57 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.960 16:50:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.961 16:50:57 version -- scripts/common.sh@344 -- # case "$op" in 00:05:38.961 16:50:57 version -- scripts/common.sh@345 -- # : 1 00:05:38.961 16:50:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.961 16:50:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.961 16:50:57 version -- scripts/common.sh@365 -- # decimal 1 00:05:38.961 16:50:57 version -- scripts/common.sh@353 -- # local d=1 00:05:38.961 16:50:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.961 16:50:57 version -- scripts/common.sh@355 -- # echo 1 00:05:38.961 16:50:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.961 16:50:57 version -- scripts/common.sh@366 -- # decimal 2 00:05:38.961 16:50:57 version -- scripts/common.sh@353 -- # local d=2 00:05:38.961 16:50:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.961 16:50:57 version -- scripts/common.sh@355 -- # echo 2 00:05:38.961 16:50:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.961 16:50:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.961 16:50:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.961 16:50:57 version -- scripts/common.sh@368 -- # return 0 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.961 --rc genhtml_branch_coverage=1 00:05:38.961 --rc genhtml_function_coverage=1 00:05:38.961 --rc genhtml_legend=1 00:05:38.961 --rc geninfo_all_blocks=1 00:05:38.961 --rc geninfo_unexecuted_blocks=1 00:05:38.961 00:05:38.961 ' 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.961 --rc genhtml_branch_coverage=1 00:05:38.961 --rc genhtml_function_coverage=1 00:05:38.961 --rc genhtml_legend=1 00:05:38.961 --rc geninfo_all_blocks=1 00:05:38.961 --rc geninfo_unexecuted_blocks=1 00:05:38.961 00:05:38.961 ' 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.961 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:38.961 --rc genhtml_branch_coverage=1 00:05:38.961 --rc genhtml_function_coverage=1 00:05:38.961 --rc genhtml_legend=1 00:05:38.961 --rc geninfo_all_blocks=1 00:05:38.961 --rc geninfo_unexecuted_blocks=1 00:05:38.961 00:05:38.961 ' 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.961 --rc genhtml_branch_coverage=1 00:05:38.961 --rc genhtml_function_coverage=1 00:05:38.961 --rc genhtml_legend=1 00:05:38.961 --rc geninfo_all_blocks=1 00:05:38.961 --rc geninfo_unexecuted_blocks=1 00:05:38.961 00:05:38.961 ' 00:05:38.961 16:50:57 version -- app/version.sh@17 -- # get_header_version major 00:05:38.961 16:50:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # cut -f2 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.961 16:50:57 version -- app/version.sh@17 -- # major=25 00:05:38.961 16:50:57 version -- app/version.sh@18 -- # get_header_version minor 00:05:38.961 16:50:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # cut -f2 00:05:38.961 16:50:57 version -- app/version.sh@18 -- # minor=1 00:05:38.961 16:50:57 version -- app/version.sh@19 -- # get_header_version patch 00:05:38.961 16:50:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # cut -f2 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.961 16:50:57 version -- app/version.sh@19 -- # patch=0 00:05:38.961 16:50:57 version -- app/version.sh@20 -- # get_header_version suffix 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # cut -f2 00:05:38.961 16:50:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:38.961 16:50:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.961 16:50:57 version -- app/version.sh@20 -- # suffix=-pre 00:05:38.961 16:50:57 version -- app/version.sh@22 -- # version=25.1 00:05:38.961 16:50:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:38.961 16:50:57 version -- app/version.sh@28 -- # version=25.1rc0 00:05:38.961 16:50:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:38.961 16:50:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:38.961 16:50:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:38.961 16:50:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:38.961 00:05:38.961 real 0m0.228s 00:05:38.961 user 0m0.142s 00:05:38.961 sys 0m0.121s 00:05:38.961 16:50:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.961 ************************************ 00:05:38.961 END TEST version 00:05:38.961 ************************************ 00:05:38.961 16:50:57 version -- common/autotest_common.sh@10 -- # set +x 00:05:38.961 16:50:57 -- 
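get_header_version above pulls each component out of include/spdk/version.h with grep, cut, and tr, giving major 25, minor 1, patch 0, suffix -pre. Assembling the final string the same way the trace does (the -pre to rc0 step is inferred from the 25.1 -> 25.1rc0 transition above):

    # Rebuild the expected version string from the parsed header values.
    major=25 minor=1 patch=0 suffix=-pre
    version="${major}.${minor}"
    if (( patch != 0 )); then version="${version}.${patch}"; fi   # skipped, patch is 0
    if [[ "$suffix" == "-pre" ]]; then version="${version}rc0"; fi
    echo "$version"    # 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'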
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:38.961 16:50:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:38.961 16:50:57 -- spdk/autotest.sh@194 -- # uname -s 00:05:38.961 16:50:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:38.961 16:50:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:38.961 16:50:57 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:38.961 16:50:57 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:38.961 16:50:57 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:38.961 16:50:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.961 16:50:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.961 16:50:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.961 ************************************ 00:05:38.961 START TEST spdk_dd 00:05:38.961 ************************************ 00:05:38.961 16:50:57 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:39.220 * Looking for test storage... 00:05:39.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:39.220 16:50:57 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.220 16:50:57 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.220 16:50:57 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.220 16:50:57 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.220 16:50:57 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:39.221 16:50:57 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.221 16:50:57 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.221 --rc genhtml_branch_coverage=1 00:05:39.221 --rc genhtml_function_coverage=1 00:05:39.221 --rc genhtml_legend=1 00:05:39.221 --rc geninfo_all_blocks=1 00:05:39.221 --rc geninfo_unexecuted_blocks=1 00:05:39.221 00:05:39.221 ' 00:05:39.221 16:50:57 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.221 --rc genhtml_branch_coverage=1 00:05:39.221 --rc genhtml_function_coverage=1 00:05:39.221 --rc genhtml_legend=1 00:05:39.221 --rc geninfo_all_blocks=1 00:05:39.221 --rc geninfo_unexecuted_blocks=1 00:05:39.221 00:05:39.221 ' 00:05:39.221 16:50:57 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.221 --rc genhtml_branch_coverage=1 00:05:39.221 --rc genhtml_function_coverage=1 00:05:39.221 --rc genhtml_legend=1 00:05:39.221 --rc geninfo_all_blocks=1 00:05:39.221 --rc geninfo_unexecuted_blocks=1 00:05:39.221 00:05:39.221 ' 00:05:39.221 16:50:57 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.221 --rc genhtml_branch_coverage=1 00:05:39.221 --rc genhtml_function_coverage=1 00:05:39.221 --rc genhtml_legend=1 00:05:39.221 --rc geninfo_all_blocks=1 00:05:39.221 --rc geninfo_unexecuted_blocks=1 00:05:39.221 00:05:39.221 ' 00:05:39.221 16:50:57 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.221 16:50:57 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.221 16:50:57 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.221 16:50:57 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.221 16:50:57 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.221 16:50:57 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:39.221 16:50:57 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.221 16:50:57 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.738 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.738 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.738 16:50:58 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:39.738 16:50:58 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:39.738 16:50:58 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:39.738 16:50:58 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:39.739 16:50:58 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:39.739 16:50:58 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
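[Editor's note] The trace above is scripts/common.sh enumerating NVMe controllers before dd.sh picks its test devices. A condensed sketch of that enumeration, reusing the exact lspci/awk pipeline shown in the trace; the function name and loop structure here are illustrative, not the verbatim SPDK helpers:

# Sketch only: PCI class 01 / subclass 08 / prog-if 02 is the NVMe controller class code.
nvme_in_userspace_sketch() {
  local bdf
  for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    # Controllers still bound to the kernel nvme driver are skipped; the two
    # controllers on this VM sit on uio_pci_generic, so both survive the filter.
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
    echo "$bdf"
  done
}

On this VM the sketch would print 0000:00:10.0 and 0000:00:11.0, the two addresses dd.sh stores in its nvmes array and later hands to basic_rw.sh.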
00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
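[Editor's note] The repeating read / [[ ... == liburing.so.* ]] pairs above, which continue below, are dd/common.sh's check_liburing walking every DT_NEEDED entry of the spdk_dd binary. A condensed, simplified equivalent; the traced script additionally cross-checks CONFIG_URING in build_config.sh before setting the flag, as shown further down in the log:

liburing_in_use=0
# Walk the dynamic-section NEEDED entries of the spdk_dd binary named in the trace.
while read -r _ lib _; do
  if [[ $lib == liburing.so.* ]]; then
    printf '* spdk_dd linked to liburing\n'
    liburing_in_use=1   # simplification: the traced script also requires CONFIG_URING=y
  fi
done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)

With liburing_in_use=1 and SPDK_TEST_URING=1, the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard in dd.sh stays false and the uring-capable test suites run.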
00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:39.739 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:39.740 * spdk_dd linked to liburing 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:39.740 16:50:58 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:39.740 16:50:58 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:39.741 16:50:58 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:39.741 16:50:58 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:39.741 16:50:58 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:39.741 16:50:58 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:39.741 16:50:58 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:39.741 16:50:58 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:39.741 16:50:58 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:39.741 16:50:58 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:39.741 16:50:58 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.741 16:50:58 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.741 16:50:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:39.741 ************************************ 00:05:39.741 START TEST spdk_dd_basic_rw 00:05:39.741 ************************************ 00:05:39.741 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:39.741 * Looking for test storage... 00:05:39.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:39.741 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.741 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.741 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.000 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.001 --rc genhtml_branch_coverage=1 00:05:40.001 --rc genhtml_function_coverage=1 00:05:40.001 --rc genhtml_legend=1 00:05:40.001 --rc geninfo_all_blocks=1 00:05:40.001 --rc geninfo_unexecuted_blocks=1 00:05:40.001 00:05:40.001 ' 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.001 --rc genhtml_branch_coverage=1 00:05:40.001 --rc genhtml_function_coverage=1 00:05:40.001 --rc genhtml_legend=1 00:05:40.001 --rc geninfo_all_blocks=1 00:05:40.001 --rc geninfo_unexecuted_blocks=1 00:05:40.001 00:05:40.001 ' 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.001 --rc genhtml_branch_coverage=1 00:05:40.001 --rc genhtml_function_coverage=1 00:05:40.001 --rc genhtml_legend=1 00:05:40.001 --rc geninfo_all_blocks=1 00:05:40.001 --rc geninfo_unexecuted_blocks=1 00:05:40.001 00:05:40.001 ' 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.001 --rc genhtml_branch_coverage=1 00:05:40.001 --rc genhtml_function_coverage=1 00:05:40.001 --rc genhtml_legend=1 00:05:40.001 --rc geninfo_all_blocks=1 00:05:40.001 --rc geninfo_unexecuted_blocks=1 00:05:40.001 00:05:40.001 ' 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
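[Editor's note] The next block of trace output is dd/common.sh's get_native_nvme_bs: it runs spdk_nvme_identify against 0000:00:10.0 and extracts the data size of the controller's currently selected LBA format. A rough sketch under the same assumptions; variable names are illustrative, and the traced script uses mapfile plus ${id[*]} rather than a scalar:

get_native_nvme_bs_sketch() {
  local pci=$1 id lbaf
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
  # Which LBA format is currently selected? (#04 on this QEMU namespace)
  [[ $id =~ "Current LBA Format:"\ *"LBA Format #"([0-9]+) ]] || return 1
  lbaf=${BASH_REMATCH[1]}
  # Data size of that format is the native block size (4096 here).
  [[ $id =~ "LBA Format #$lbaf: Data Size:"\ *([0-9]+) ]] || return 1
  echo "${BASH_REMATCH[1]}"
}

basic_rw.sh records the result as native_bs=4096; the first sub-test, dd_bs_lt_native_bs, then runs spdk_dd with --bs=2048 and passes only because spdk_dd rejects a block size smaller than the native one, as the "--bs value cannot be less than ..." error in the trace below shows.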
00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:40.001 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:40.264 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:40.264 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.265 ************************************ 00:05:40.265 START TEST dd_bs_lt_native_bs 00:05:40.265 ************************************ 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:40.265 16:50:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:40.265 { 00:05:40.265 "subsystems": [ 00:05:40.265 { 00:05:40.265 "subsystem": "bdev", 00:05:40.265 "config": [ 00:05:40.265 { 00:05:40.265 "params": { 00:05:40.265 "trtype": "pcie", 00:05:40.265 "traddr": "0000:00:10.0", 00:05:40.265 "name": "Nvme0" 00:05:40.265 }, 00:05:40.265 "method": "bdev_nvme_attach_controller" 00:05:40.265 }, 00:05:40.265 { 00:05:40.265 "method": "bdev_wait_for_examine" 00:05:40.265 } 00:05:40.265 ] 00:05:40.265 } 00:05:40.265 ] 00:05:40.265 } 00:05:40.265 [2024-11-20 16:50:58.616733] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:40.265 [2024-11-20 16:50:58.616874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:05:40.265 [2024-11-20 16:50:58.763258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.524 [2024-11-20 16:50:58.818456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.524 [2024-11-20 16:50:58.876745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.524 [2024-11-20 16:50:58.992331] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:40.524 [2024-11-20 16:50:58.992431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.796 [2024-11-20 16:50:59.129643] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:40.796 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:40.796 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.796 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.797 00:05:40.797 real 0m0.633s 00:05:40.797 user 0m0.424s 00:05:40.797 sys 0m0.165s 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.797 16:50:59 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:40.797 ************************************ 00:05:40.797 END TEST dd_bs_lt_native_bs 00:05:40.797 ************************************ 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.797 ************************************ 00:05:40.797 START TEST dd_rw 00:05:40.797 ************************************ 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:40.797 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.733 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:41.733 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:41.733 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.733 16:50:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.733 [2024-11-20 16:50:59.952716] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
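The dd_rw setup traced above derives its test matrix from the native block size obtained earlier, when the controller identify output was matched for the current LBA format's data size (4096 bytes in this run): each left shift doubles the block size, and every size is exercised at queue depths 1 and 64. A minimal bash sketch of that logic follows; id_output, identify.txt and run_dd_pass are illustrative stand-ins, while the regex, the shift loop and the queue depths are taken from the trace.

    #!/usr/bin/env bash
    # Sketch of the native-block-size detection and the bs/qd sweep seen in the
    # dd_rw trace. Only the regex, the shift loop and the queue depths come from
    # the log; the file name and the pass runner are placeholders.
    id_output=$(cat identify.txt)              # assumed capture of the identify dump shown above
    re='LBA Format #04: Data Size: *([0-9]+)'
    [[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # 4096 here

    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))            # 4096, 8192, 16384
    done

    run_dd_pass() {                            # hypothetical stand-in for one write/read/diff cycle
        echo "would run spdk_dd with --bs=$1 --qd=$2"
    }

    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            run_dd_pass "$bs" "$qd"
        done
    done

Each of the six resulting combinations corresponds to one write/read/verify cycle in the log that follows.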
00:05:41.733 [2024-11-20 16:50:59.952816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:05:41.733 { 00:05:41.733 "subsystems": [ 00:05:41.733 { 00:05:41.733 "subsystem": "bdev", 00:05:41.733 "config": [ 00:05:41.733 { 00:05:41.733 "params": { 00:05:41.733 "trtype": "pcie", 00:05:41.733 "traddr": "0000:00:10.0", 00:05:41.733 "name": "Nvme0" 00:05:41.733 }, 00:05:41.733 "method": "bdev_nvme_attach_controller" 00:05:41.733 }, 00:05:41.733 { 00:05:41.733 "method": "bdev_wait_for_examine" 00:05:41.733 } 00:05:41.733 ] 00:05:41.733 } 00:05:41.733 ] 00:05:41.733 } 00:05:41.733 [2024-11-20 16:51:00.095005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.733 [2024-11-20 16:51:00.157232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.733 [2024-11-20 16:51:00.217881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.991  [2024-11-20T16:51:00.527Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:41.991 00:05:42.250 16:51:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:42.250 16:51:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:42.250 16:51:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.250 16:51:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.250 [2024-11-20 16:51:00.585162] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:42.250 [2024-11-20 16:51:00.585778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59693 ] 00:05:42.250 { 00:05:42.250 "subsystems": [ 00:05:42.250 { 00:05:42.250 "subsystem": "bdev", 00:05:42.250 "config": [ 00:05:42.250 { 00:05:42.250 "params": { 00:05:42.250 "trtype": "pcie", 00:05:42.250 "traddr": "0000:00:10.0", 00:05:42.250 "name": "Nvme0" 00:05:42.250 }, 00:05:42.250 "method": "bdev_nvme_attach_controller" 00:05:42.250 }, 00:05:42.250 { 00:05:42.250 "method": "bdev_wait_for_examine" 00:05:42.250 } 00:05:42.250 ] 00:05:42.250 } 00:05:42.250 ] 00:05:42.250 } 00:05:42.250 [2024-11-20 16:51:00.732789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.509 [2024-11-20 16:51:00.791388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.509 [2024-11-20 16:51:00.847170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.509  [2024-11-20T16:51:01.303Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:42.767 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.767 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 { 00:05:42.767 "subsystems": [ 00:05:42.767 { 00:05:42.767 "subsystem": "bdev", 00:05:42.767 "config": [ 00:05:42.767 { 00:05:42.767 "params": { 00:05:42.767 "trtype": "pcie", 00:05:42.767 "traddr": "0000:00:10.0", 00:05:42.767 "name": "Nvme0" 00:05:42.767 }, 00:05:42.767 "method": "bdev_nvme_attach_controller" 00:05:42.767 }, 00:05:42.767 { 00:05:42.767 "method": "bdev_wait_for_examine" 00:05:42.767 } 00:05:42.767 ] 00:05:42.767 } 00:05:42.767 ] 00:05:42.767 } 00:05:42.767 [2024-11-20 16:51:01.229920] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:42.767 [2024-11-20 16:51:01.230014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:05:43.068 [2024-11-20 16:51:01.377978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.068 [2024-11-20 16:51:01.438133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.068 [2024-11-20 16:51:01.493257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.335  [2024-11-20T16:51:01.871Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:43.335 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:43.335 16:51:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 16:51:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:43.903 16:51:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.903 16:51:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.903 16:51:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.162 [2024-11-20 16:51:02.456292] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
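Each (block size, queue depth) combination above runs the same verify-then-clear cycle: dd.dump0 is written to the Nvme0n1 bdev, read back into dd.dump1, the two dumps are compared with diff -q, and clear_nvme then overwrites the first mebibyte of the bdev from /dev/zero before the next pass. A sketch of one pass, reconstructed from the trace, with CONF standing in for the generated bdev configuration shown in the JSON blocks:

    #!/usr/bin/env bash
    # One dd_rw verification pass as it appears in the trace above.
    # CONF is a placeholder; bs, qd and count match the first pass in the log.
    set -euo pipefail
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF=bdev.json            # pcie/Nvme0 attach_controller + wait_for_examine config
    bs=4096 qd=1 count=15

    # Write the generated dump file to the bdev, then read the same region back.
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"

    # The pass only succeeds if the round trip preserved the data byte for byte.
    diff -q dd.dump0 dd.dump1

    # clear_nvme: zero the first MiB of the bdev before the next combination runs.
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"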
00:05:44.162 [2024-11-20 16:51:02.456409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:05:44.162 { 00:05:44.162 "subsystems": [ 00:05:44.162 { 00:05:44.162 "subsystem": "bdev", 00:05:44.162 "config": [ 00:05:44.162 { 00:05:44.162 "params": { 00:05:44.162 "trtype": "pcie", 00:05:44.162 "traddr": "0000:00:10.0", 00:05:44.162 "name": "Nvme0" 00:05:44.162 }, 00:05:44.162 "method": "bdev_nvme_attach_controller" 00:05:44.162 }, 00:05:44.162 { 00:05:44.162 "method": "bdev_wait_for_examine" 00:05:44.162 } 00:05:44.162 ] 00:05:44.162 } 00:05:44.162 ] 00:05:44.162 } 00:05:44.162 [2024-11-20 16:51:02.598097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.162 [2024-11-20 16:51:02.649511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.421 [2024-11-20 16:51:02.703620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.421  [2024-11-20T16:51:03.216Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:44.680 00:05:44.680 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:44.680 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:44.680 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.680 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.680 [2024-11-20 16:51:03.055861] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:44.680 [2024-11-20 16:51:03.055943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59741 ] 00:05:44.680 { 00:05:44.680 "subsystems": [ 00:05:44.680 { 00:05:44.680 "subsystem": "bdev", 00:05:44.680 "config": [ 00:05:44.680 { 00:05:44.680 "params": { 00:05:44.680 "trtype": "pcie", 00:05:44.680 "traddr": "0000:00:10.0", 00:05:44.680 "name": "Nvme0" 00:05:44.680 }, 00:05:44.680 "method": "bdev_nvme_attach_controller" 00:05:44.680 }, 00:05:44.680 { 00:05:44.680 "method": "bdev_wait_for_examine" 00:05:44.680 } 00:05:44.680 ] 00:05:44.680 } 00:05:44.680 ] 00:05:44.680 } 00:05:44.680 [2024-11-20 16:51:03.200317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.938 [2024-11-20 16:51:03.253900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.938 [2024-11-20 16:51:03.312333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.938  [2024-11-20T16:51:03.733Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:45.197 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.197 16:51:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.197 { 00:05:45.197 "subsystems": [ 00:05:45.197 { 00:05:45.197 "subsystem": "bdev", 00:05:45.197 "config": [ 00:05:45.197 { 00:05:45.197 "params": { 00:05:45.197 "trtype": "pcie", 00:05:45.197 "traddr": "0000:00:10.0", 00:05:45.197 "name": "Nvme0" 00:05:45.197 }, 00:05:45.197 "method": "bdev_nvme_attach_controller" 00:05:45.197 }, 00:05:45.197 { 00:05:45.197 "method": "bdev_wait_for_examine" 00:05:45.197 } 00:05:45.197 ] 00:05:45.197 } 00:05:45.197 ] 00:05:45.197 } 00:05:45.197 [2024-11-20 16:51:03.685297] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:45.197 [2024-11-20 16:51:03.685437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59762 ] 00:05:45.456 [2024-11-20 16:51:03.832426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.456 [2024-11-20 16:51:03.877383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.457 [2024-11-20 16:51:03.932562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.715  [2024-11-20T16:51:04.251Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.715 00:05:45.715 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.974 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:46.542 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.542 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.542 16:51:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.542 [2024-11-20 16:51:04.881306] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
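Note that spdk_dd never reads its bdev configuration from a file on disk in these runs: every invocation is given --json /dev/fd/62 (or a neighbouring descriptor), which is the kind of path bash hands out for process substitution, and the JSON blocks printed in the log are that generated configuration. A sketch of the pattern, with gen_conf reduced to a stand-in for the real helper in test/dd/common.sh:

    #!/usr/bin/env bash
    # Sketch: feeding a generated bdev config to spdk_dd through /dev/fd/NN via
    # process substitution. gen_conf here is a stand-in; the JSON mirrors the
    # config blocks printed throughout the log.
    gen_conf() {
        printf '%s\n' \
            '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
            '  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },' \
            '    "method": "bdev_nvme_attach_controller" },' \
            '  { "method": "bdev_wait_for_examine" } ] } ] }'
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)

Whether the test opens the descriptor via process substitution or some other redirection is not visible in this log; only the /dev/fd paths and the config contents are.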
00:05:46.542 [2024-11-20 16:51:04.881416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:05:46.542 { 00:05:46.542 "subsystems": [ 00:05:46.542 { 00:05:46.542 "subsystem": "bdev", 00:05:46.542 "config": [ 00:05:46.542 { 00:05:46.542 "params": { 00:05:46.542 "trtype": "pcie", 00:05:46.542 "traddr": "0000:00:10.0", 00:05:46.542 "name": "Nvme0" 00:05:46.542 }, 00:05:46.542 "method": "bdev_nvme_attach_controller" 00:05:46.542 }, 00:05:46.542 { 00:05:46.542 "method": "bdev_wait_for_examine" 00:05:46.542 } 00:05:46.542 ] 00:05:46.542 } 00:05:46.542 ] 00:05:46.542 } 00:05:46.542 [2024-11-20 16:51:05.031684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.801 [2024-11-20 16:51:05.089629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.801 [2024-11-20 16:51:05.149175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.801  [2024-11-20T16:51:05.596Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:47.060 00:05:47.060 16:51:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:47.060 16:51:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.060 16:51:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.060 16:51:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.060 { 00:05:47.060 "subsystems": [ 00:05:47.060 { 00:05:47.060 "subsystem": "bdev", 00:05:47.060 "config": [ 00:05:47.060 { 00:05:47.060 "params": { 00:05:47.060 "trtype": "pcie", 00:05:47.060 "traddr": "0000:00:10.0", 00:05:47.060 "name": "Nvme0" 00:05:47.060 }, 00:05:47.060 "method": "bdev_nvme_attach_controller" 00:05:47.060 }, 00:05:47.060 { 00:05:47.060 "method": "bdev_wait_for_examine" 00:05:47.060 } 00:05:47.060 ] 00:05:47.060 } 00:05:47.060 ] 00:05:47.060 } 00:05:47.060 [2024-11-20 16:51:05.521751] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:47.061 [2024-11-20 16:51:05.521894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:05:47.319 [2024-11-20 16:51:05.670767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.319 [2024-11-20 16:51:05.730653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.319 [2024-11-20 16:51:05.784914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.588  [2024-11-20T16:51:06.124Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:47.588 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.588 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 { 00:05:47.910 "subsystems": [ 00:05:47.910 { 00:05:47.910 "subsystem": "bdev", 00:05:47.910 "config": [ 00:05:47.910 { 00:05:47.910 "params": { 00:05:47.910 "trtype": "pcie", 00:05:47.910 "traddr": "0000:00:10.0", 00:05:47.910 "name": "Nvme0" 00:05:47.910 }, 00:05:47.910 "method": "bdev_nvme_attach_controller" 00:05:47.910 }, 00:05:47.910 { 00:05:47.910 "method": "bdev_wait_for_examine" 00:05:47.910 } 00:05:47.910 ] 00:05:47.910 } 00:05:47.910 ] 00:05:47.911 } 00:05:47.911 [2024-11-20 16:51:06.151228] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:47.911 [2024-11-20 16:51:06.151320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:05:47.911 [2024-11-20 16:51:06.296286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.911 [2024-11-20 16:51:06.354109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.911 [2024-11-20 16:51:06.410791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.175  [2024-11-20T16:51:06.969Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:48.433 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:48.433 16:51:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.000 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:49.000 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.000 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.000 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.000 [2024-11-20 16:51:07.392000] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:49.000 [2024-11-20 16:51:07.392134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:05:49.000 { 00:05:49.000 "subsystems": [ 00:05:49.000 { 00:05:49.000 "subsystem": "bdev", 00:05:49.000 "config": [ 00:05:49.000 { 00:05:49.000 "params": { 00:05:49.000 "trtype": "pcie", 00:05:49.000 "traddr": "0000:00:10.0", 00:05:49.000 "name": "Nvme0" 00:05:49.000 }, 00:05:49.000 "method": "bdev_nvme_attach_controller" 00:05:49.000 }, 00:05:49.000 { 00:05:49.000 "method": "bdev_wait_for_examine" 00:05:49.000 } 00:05:49.000 ] 00:05:49.000 } 00:05:49.000 ] 00:05:49.000 } 00:05:49.259 [2024-11-20 16:51:07.538091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.259 [2024-11-20 16:51:07.583394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.259 [2024-11-20 16:51:07.637446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.259  [2024-11-20T16:51:08.054Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:49.518 00:05:49.518 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:49.518 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:49.518 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.518 16:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.518 [2024-11-20 16:51:07.987114] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:49.518 [2024-11-20 16:51:07.987207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59848 ] 00:05:49.518 { 00:05:49.518 "subsystems": [ 00:05:49.518 { 00:05:49.518 "subsystem": "bdev", 00:05:49.518 "config": [ 00:05:49.518 { 00:05:49.518 "params": { 00:05:49.518 "trtype": "pcie", 00:05:49.518 "traddr": "0000:00:10.0", 00:05:49.518 "name": "Nvme0" 00:05:49.518 }, 00:05:49.518 "method": "bdev_nvme_attach_controller" 00:05:49.518 }, 00:05:49.518 { 00:05:49.518 "method": "bdev_wait_for_examine" 00:05:49.518 } 00:05:49.518 ] 00:05:49.518 } 00:05:49.518 ] 00:05:49.518 } 00:05:49.777 [2024-11-20 16:51:08.131083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.777 [2024-11-20 16:51:08.190030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.777 [2024-11-20 16:51:08.246818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.036  [2024-11-20T16:51:08.572Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:50.036 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:50.036 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:50.296 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:50.296 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:50.296 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.296 16:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.296 [2024-11-20 16:51:08.625883] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:50.296 [2024-11-20 16:51:08.625976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:05:50.296 { 00:05:50.296 "subsystems": [ 00:05:50.296 { 00:05:50.296 "subsystem": "bdev", 00:05:50.296 "config": [ 00:05:50.296 { 00:05:50.296 "params": { 00:05:50.296 "trtype": "pcie", 00:05:50.296 "traddr": "0000:00:10.0", 00:05:50.296 "name": "Nvme0" 00:05:50.296 }, 00:05:50.296 "method": "bdev_nvme_attach_controller" 00:05:50.296 }, 00:05:50.296 { 00:05:50.296 "method": "bdev_wait_for_examine" 00:05:50.296 } 00:05:50.296 ] 00:05:50.296 } 00:05:50.296 ] 00:05:50.296 } 00:05:50.296 [2024-11-20 16:51:08.772021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.296 [2024-11-20 16:51:08.816798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.554 [2024-11-20 16:51:08.870228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.554  [2024-11-20T16:51:09.349Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:50.813 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:50.813 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.381 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:51.381 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:51.381 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.381 16:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.381 [2024-11-20 16:51:09.711281] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:51.381 { 00:05:51.381 "subsystems": [ 00:05:51.381 { 00:05:51.381 "subsystem": "bdev", 00:05:51.381 "config": [ 00:05:51.381 { 00:05:51.381 "params": { 00:05:51.381 "trtype": "pcie", 00:05:51.381 "traddr": "0000:00:10.0", 00:05:51.381 "name": "Nvme0" 00:05:51.381 }, 00:05:51.381 "method": "bdev_nvme_attach_controller" 00:05:51.381 }, 00:05:51.381 { 00:05:51.381 "method": "bdev_wait_for_examine" 00:05:51.381 } 00:05:51.381 ] 00:05:51.381 } 00:05:51.381 ] 00:05:51.382 } 00:05:51.382 [2024-11-20 16:51:09.711388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:05:51.382 [2024-11-20 16:51:09.857179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.640 [2024-11-20 16:51:09.917076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.640 [2024-11-20 16:51:09.972060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.640  [2024-11-20T16:51:10.446Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:51.910 00:05:51.910 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:51.910 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:51.910 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.910 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.910 { 00:05:51.910 "subsystems": [ 00:05:51.910 { 00:05:51.910 "subsystem": "bdev", 00:05:51.910 "config": [ 00:05:51.910 { 00:05:51.910 "params": { 00:05:51.910 "trtype": "pcie", 00:05:51.910 "traddr": "0000:00:10.0", 00:05:51.910 "name": "Nvme0" 00:05:51.910 }, 00:05:51.910 "method": "bdev_nvme_attach_controller" 00:05:51.910 }, 00:05:51.910 { 00:05:51.910 "method": "bdev_wait_for_examine" 00:05:51.910 } 00:05:51.910 ] 00:05:51.910 } 00:05:51.910 ] 00:05:51.910 } 00:05:51.910 [2024-11-20 16:51:10.356397] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:51.910 [2024-11-20 16:51:10.356501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59897 ] 00:05:52.195 [2024-11-20 16:51:10.507128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.195 [2024-11-20 16:51:10.588374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.195 [2024-11-20 16:51:10.649334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.454  [2024-11-20T16:51:10.990Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:52.454 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.454 16:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.713 { 00:05:52.713 "subsystems": [ 00:05:52.713 { 00:05:52.713 "subsystem": "bdev", 00:05:52.713 "config": [ 00:05:52.713 { 00:05:52.713 "params": { 00:05:52.713 "trtype": "pcie", 00:05:52.713 "traddr": "0000:00:10.0", 00:05:52.713 "name": "Nvme0" 00:05:52.713 }, 00:05:52.713 "method": "bdev_nvme_attach_controller" 00:05:52.713 }, 00:05:52.713 { 00:05:52.713 "method": "bdev_wait_for_examine" 00:05:52.713 } 00:05:52.713 ] 00:05:52.713 } 00:05:52.713 ] 00:05:52.713 } 00:05:52.713 [2024-11-20 16:51:11.043752] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:52.713 [2024-11-20 16:51:11.043885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:05:52.713 [2024-11-20 16:51:11.195616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.972 [2024-11-20 16:51:11.262598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.972 [2024-11-20 16:51:11.323867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.972  [2024-11-20T16:51:11.767Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.231 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:53.231 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.232 16:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.799 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:53.799 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.799 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.799 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.799 [2024-11-20 16:51:12.236190] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:53.799 [2024-11-20 16:51:12.236306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 00:05:53.799 { 00:05:53.799 "subsystems": [ 00:05:53.799 { 00:05:53.799 "subsystem": "bdev", 00:05:53.799 "config": [ 00:05:53.799 { 00:05:53.799 "params": { 00:05:53.799 "trtype": "pcie", 00:05:53.799 "traddr": "0000:00:10.0", 00:05:53.799 "name": "Nvme0" 00:05:53.799 }, 00:05:53.799 "method": "bdev_nvme_attach_controller" 00:05:53.799 }, 00:05:53.799 { 00:05:53.799 "method": "bdev_wait_for_examine" 00:05:53.799 } 00:05:53.799 ] 00:05:53.799 } 00:05:53.799 ] 00:05:53.799 } 00:05:54.059 [2024-11-20 16:51:12.380993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.059 [2024-11-20 16:51:12.437624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.059 [2024-11-20 16:51:12.493186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.318  [2024-11-20T16:51:12.854Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:54.318 00:05:54.318 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:54.318 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:54.318 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.318 16:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.577 { 00:05:54.577 "subsystems": [ 00:05:54.577 { 00:05:54.577 "subsystem": "bdev", 00:05:54.577 "config": [ 00:05:54.577 { 00:05:54.577 "params": { 00:05:54.577 "trtype": "pcie", 00:05:54.577 "traddr": "0000:00:10.0", 00:05:54.577 "name": "Nvme0" 00:05:54.577 }, 00:05:54.577 "method": "bdev_nvme_attach_controller" 00:05:54.577 }, 00:05:54.577 { 00:05:54.577 "method": "bdev_wait_for_examine" 00:05:54.577 } 00:05:54.577 ] 00:05:54.577 } 00:05:54.577 ] 00:05:54.577 } 00:05:54.577 [2024-11-20 16:51:12.864809] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:54.577 [2024-11-20 16:51:12.864941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:05:54.577 [2024-11-20 16:51:13.013752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.577 [2024-11-20 16:51:13.075497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.836 [2024-11-20 16:51:13.131602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.836  [2024-11-20T16:51:13.631Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:55.095 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.095 16:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.095 [2024-11-20 16:51:13.510316] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:55.096 [2024-11-20 16:51:13.510433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:05:55.096 { 00:05:55.096 "subsystems": [ 00:05:55.096 { 00:05:55.096 "subsystem": "bdev", 00:05:55.096 "config": [ 00:05:55.096 { 00:05:55.096 "params": { 00:05:55.096 "trtype": "pcie", 00:05:55.096 "traddr": "0000:00:10.0", 00:05:55.096 "name": "Nvme0" 00:05:55.096 }, 00:05:55.096 "method": "bdev_nvme_attach_controller" 00:05:55.096 }, 00:05:55.096 { 00:05:55.096 "method": "bdev_wait_for_examine" 00:05:55.096 } 00:05:55.096 ] 00:05:55.096 } 00:05:55.096 ] 00:05:55.096 } 00:05:55.354 [2024-11-20 16:51:13.659995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.354 [2024-11-20 16:51:13.721478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.354 [2024-11-20 16:51:13.776851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.613  [2024-11-20T16:51:14.149Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:55.613 00:05:55.613 00:05:55.613 real 0m14.834s 00:05:55.613 user 0m10.751s 00:05:55.613 sys 0m5.698s 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.613 ************************************ 00:05:55.613 END TEST dd_rw 00:05:55.613 ************************************ 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.613 ************************************ 00:05:55.613 START TEST dd_rw_offset 00:05:55.613 ************************************ 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:55.613 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:55.873 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:55.873 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:55.873 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:55.874 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=n07vdt37lcpqprxnr90rurgp957y8rtxdmqxs60ha1mofdvucg89b9dzs8b4xy334s9isvmgem6khi8wfo9jom7yxtmwfn4e89u8n29wwqf4vbt79g8yxpur0ry7cpyasoq12nclv5kg5rcxz6c5jq83vvu0rl3d6eaggx89wti1fsqwl722k0wiooaejkv60jfqu69hp2mhvv2gglf42ger880d5kj9kuoc9q6vzugvw5ak4y2e0sxz739ollb9cjmfnbm7uhb8emkaozpdxsehxcrte1vywuxzepgjcz0isfs6ccj14r7bw4jt5qlcu0qsxmtgxv3tk8um3oaws1lac8p8q2fpwu0htv2miqcseektodhkdkcj0k8gw4h6utw02vhh43p9uxuqfcsezsfjh7c4d384ci3888lpyfbairayv9f77gejmy2emmn9ycutp2cpfaxyvu3n7553oxbn25tb5swd05vrxitl1ux5njm42badm4srco6vbl2jdlkaolqdevm8lmbwfr2ie9smdpwyrtpprdzfen7fqvkbsq2gy7cviv1ylgxg9h17wq5n2h7y1djz9909km3hcivdswxjlic6shz92t2ysskf6upmrsikk6wvnl8eajpqri274gpjie801qf58gwssebx8hrojrtlrzlaylqqwgdhze3jeou44ejbb4x5oi9ljqenpm5hg4ik6pryppo3d13w4hgonu1o1l00k2ayi6z2whmee80y59os0rwz26benidsdlbhjszyliuybo37zwoiq4j2pwimifxgds5pkmcpbvfdytvha2a380eeimvfji5fg8oq5ca58lfqsal28m9eapp61cil881yc6vfqvaaa4g6n6dho5i5uki4lchq1ofcbm1pw7xr9lf2hztcexdmm9ccfas47yimcl5p6uuohg9v9rv9shn4stzfye97iersru4maq0dvpbyn5j9asddlrfhxnof8v1q5jq83ojwfr1kjz93wag7fmcwvmehm5wapq2o4w3cru2frb3hmnrn63bjvz8ofvn2ja0htdq4npcj8ckzjbxsnw5cgsyy6napsozk4qsck74y8ivhyr2g00rwewe2ouydeg96ipgux9hv7arig496zhx4ajefpguv04al1pjyrd3pgkqzfmu2iok52z0krlef5rvzte6ccct1ogfumetwirq0dcrnsrii51w6bw81h76u149y3w088kmnk4w5qtzqq2cxjmi2wfhfkooahpgagjomtrp8py2u62wl4olbx6ouhpeuy6tds9ey6dxrutspxk1ogc5b7mzlg49gdlhb94u8todfeszymz3x84qdsdzvsfyw4d3pnvypo4egpbftdkjv1sg7sy7x70n3gi5zvidybyl68exd3igc6oa0c0ehytogp9209ywzlikk37ho0w4oxv6nuxgju580a1zgd5k2qrhehq041odf2nbe1ndv3erg5zz0kru0hzfn9ba5d29fs0aq3dvk0shmw8hz4wzidrvgzdijm6m969q63ggxa0n38c397tei134t7lsk4zy40s7l897jlvxlrdxdq0w469jn1mw2dghzdzdv6xaesfuhnzhhxp6ne2x9wcy70xdw36osfzozgbb4lht6vaub2t6tjwtwrxgi6cjo1mia6y6xym428d45e1l86y28x9mhbrjuflifrei87un12p8qvxmrkzs8k9awfixcvmszqy8msxe8er9zsviext0bn5ev2tahefxvblclc6me75kq8ui1404xl8ycanf3y7s2uzdutn2qoes0ditjwpxmoas3rcmsw7bkrd75g6secg925d91bjds13emoh1uqolocm0j7bypv6h3jvlulp076k92893h93ip0rkgjxqze8m3nox1imaarliv28hwbkevjuzxazzbmw57hjot3nqfwz9ial6zjdxg6twf3v49cr43dwasma857epddcu7ns7ikrc4lwxhgb0tg6qalha47s15x6kcdm18534hyk1hd0sj7ofgeb9j038a4pxfhxfodfq438rjihhugz4nr4vlif0hm9zaruzu32m3355muz4xpdddi0254jonabtt2k2oqnko08uddxjho6j6nxo6bm8675pvyx7mdav2obd1ymm5heyceefc0wp78txzpypbg4hh6hgzuo5nxrkgp2bygtr337ixy0bi78qpdnzyswqxmxvn8v4d727tkgffeyjkyaz105of6nxsp1vcojqugix86ri7yjim3tnzlm8xvvyy74klwfidq24sfak8veyxme1kg0jq2m8pj69x4l977rjr5hz28pxe4hdibb3wd1sydzxivihgq3niyr7e9etk7ntg10be9yxwae4j2losvd84akzrgojm300f2259my4fieaud7wji92kcr0cy8m2ntabxxje9urpv4e4t3tpn7zrhv2ytt4bym6dzc5v5khvjom7iifp2gdb28biwamzcr96xoqcgjnsmlnfwwio9t7prgh4000rptx93a96jyb1jncz3rytmr9vr46tepb3nr2kugu1fiiyzh8soin1gkwbk6e4upk8ggx755n9kby89qwajawmkezz39aw16etjn6jho7itnivai3nyvndwabrty6z9fqwbrri0ivthukcnd7p9jemt37is7uvare630dxcox7n4o7wnbyaylt57pvdj2ljezjcj53vpm234f84fri0l4a3k1hx2h34q2hxszujmw2ltvt3ksel7781rf17u2rprm9x8iuswvu4muy8juwwvqs9dodnmr9g1dl6ekusfaqyht8uyvg8i3tsxnwvay4rhsvtv97okhdifpgzrpvxj77y2g2rss6t4s02wqx3165t3qpvjxourdhwf2a4wg151z620x2nha3wxgnjcq9mndah28881myj58jpxexxtd9rxms0hr4hjc02j9yf6lxv5ybdjwek75caf7jc78szmya4pnj588qf8s6lt9qjrgjfzyxrn8yx0qvkoszuwu9n29qz3plwelg8ejbm9hdd3xyn4ra4sw0dycq175hzdffp5djirn5wlbnf3i3gjh5qdcsh3r9mlqv55q39n4n5p4d9e6fiyccpbmwh3hcaoptopizlv6xy82kj9a5v3nf00kzqk194kg61kohqv0jbnq028rgscxt09pixc38im60sv9eh9b0o66sh3gev7ddqv4t8c1ue0j9imoyo81590162yvz11twegnj9j1qe9cy38jleimyctw4dz5nco5ohh2pplti4lt28in2gb6se3esi222l1q5ss3wnf4of5v4ctx9wayejs6ffqc3xvf6jrm3keaxlbo9003coab0x3fh2jqzyogfhpipj49skkvyijtv2szneohvcp98xalvgw9a0y9vm3u4g3uj7ptwlq27npqw4noaxyf55vkznqgz8px5w9qodi82kmhbibctv4op7h7w0txwpc6y4ommdg0yio0sb5nqts63x3rpgkklzh5pjt
30tpojvchbjc8qfy20y97r1wkqh2e7k9p2oqs4kco74if4fzqnnwo8iy5m3x8fep484opxuqnyu0e8qfuzwo9h497w7bxva4pi66qhim4fuzc4xmaprn53e0xikucbx0zhcad0tv0n8fivmi0u9np6a3dnshl1cax67yw5n1xq0721179elfdwwnx3c5l35kz7xzho3978bs9f16h5j3gvl9sdpmjzqzjclr6np7ud0mheswutdmcz6bwjj0q4fo6dbwwggcyri9ubjs9zw5ou80e345x7ot5w92jhcqhm2dl5ab8nsrw1gpps18hkhyyq7vkweq2hw7wiuxrdoqojh50abw2f7crnqdmjol7i9ze7xsqe7pdg92dcus77dn976ly0nv73rla0ucdqrhxx1ul934n07qzbpk6ieyrak2v1bo0jggh0bgdv2tqyhzuhfnr9al6tijuq9me4gtxap3lnzamzca5scis0hoau9g4akxy2cuf8t3ove36g5d3ftkz2yz5pwa6gydpy3qakkduj1xf905k4 00:05:55.874 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:55.874 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:55.874 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:55.874 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:55.874 { 00:05:55.874 "subsystems": [ 00:05:55.874 { 00:05:55.874 "subsystem": "bdev", 00:05:55.874 "config": [ 00:05:55.874 { 00:05:55.874 "params": { 00:05:55.874 "trtype": "pcie", 00:05:55.874 "traddr": "0000:00:10.0", 00:05:55.874 "name": "Nvme0" 00:05:55.874 }, 00:05:55.874 "method": "bdev_nvme_attach_controller" 00:05:55.874 }, 00:05:55.874 { 00:05:55.874 "method": "bdev_wait_for_examine" 00:05:55.874 } 00:05:55.874 ] 00:05:55.874 } 00:05:55.874 ] 00:05:55.874 } 00:05:55.874 [2024-11-20 16:51:14.269073] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:55.874 [2024-11-20 16:51:14.269220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ] 00:05:56.134 [2024-11-20 16:51:14.420255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.134 [2024-11-20 16:51:14.480922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.134 [2024-11-20 16:51:14.538626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.134  [2024-11-20T16:51:14.939Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:56.403 00:05:56.403 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:56.403 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:56.403 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:56.403 16:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:56.403 { 00:05:56.403 "subsystems": [ 00:05:56.403 { 00:05:56.403 "subsystem": "bdev", 00:05:56.403 "config": [ 00:05:56.403 { 00:05:56.403 "params": { 00:05:56.403 "trtype": "pcie", 00:05:56.403 "traddr": "0000:00:10.0", 00:05:56.403 "name": "Nvme0" 00:05:56.403 }, 00:05:56.403 "method": "bdev_nvme_attach_controller" 00:05:56.403 }, 00:05:56.403 { 00:05:56.403 "method": "bdev_wait_for_examine" 00:05:56.403 } 00:05:56.403 ] 00:05:56.403 } 00:05:56.403 ] 00:05:56.403 } 00:05:56.403 [2024-11-20 16:51:14.910347] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:05:56.403 [2024-11-20 16:51:14.910438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:05:56.668 [2024-11-20 16:51:15.058536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.668 [2024-11-20 16:51:15.113629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.668 [2024-11-20 16:51:15.169640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.929  [2024-11-20T16:51:15.725Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:57.189 00:05:57.189 16:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ n07vdt37lcpqprxnr90rurgp957y8rtxdmqxs60ha1mofdvucg89b9dzs8b4xy334s9isvmgem6khi8wfo9jom7yxtmwfn4e89u8n29wwqf4vbt79g8yxpur0ry7cpyasoq12nclv5kg5rcxz6c5jq83vvu0rl3d6eaggx89wti1fsqwl722k0wiooaejkv60jfqu69hp2mhvv2gglf42ger880d5kj9kuoc9q6vzugvw5ak4y2e0sxz739ollb9cjmfnbm7uhb8emkaozpdxsehxcrte1vywuxzepgjcz0isfs6ccj14r7bw4jt5qlcu0qsxmtgxv3tk8um3oaws1lac8p8q2fpwu0htv2miqcseektodhkdkcj0k8gw4h6utw02vhh43p9uxuqfcsezsfjh7c4d384ci3888lpyfbairayv9f77gejmy2emmn9ycutp2cpfaxyvu3n7553oxbn25tb5swd05vrxitl1ux5njm42badm4srco6vbl2jdlkaolqdevm8lmbwfr2ie9smdpwyrtpprdzfen7fqvkbsq2gy7cviv1ylgxg9h17wq5n2h7y1djz9909km3hcivdswxjlic6shz92t2ysskf6upmrsikk6wvnl8eajpqri274gpjie801qf58gwssebx8hrojrtlrzlaylqqwgdhze3jeou44ejbb4x5oi9ljqenpm5hg4ik6pryppo3d13w4hgonu1o1l00k2ayi6z2whmee80y59os0rwz26benidsdlbhjszyliuybo37zwoiq4j2pwimifxgds5pkmcpbvfdytvha2a380eeimvfji5fg8oq5ca58lfqsal28m9eapp61cil881yc6vfqvaaa4g6n6dho5i5uki4lchq1ofcbm1pw7xr9lf2hztcexdmm9ccfas47yimcl5p6uuohg9v9rv9shn4stzfye97iersru4maq0dvpbyn5j9asddlrfhxnof8v1q5jq83ojwfr1kjz93wag7fmcwvmehm5wapq2o4w3cru2frb3hmnrn63bjvz8ofvn2ja0htdq4npcj8ckzjbxsnw5cgsyy6napsozk4qsck74y8ivhyr2g00rwewe2ouydeg96ipgux9hv7arig496zhx4ajefpguv04al1pjyrd3pgkqzfmu2iok52z0krlef5rvzte6ccct1ogfumetwirq0dcrnsrii51w6bw81h76u149y3w088kmnk4w5qtzqq2cxjmi2wfhfkooahpgagjomtrp8py2u62wl4olbx6ouhpeuy6tds9ey6dxrutspxk1ogc5b7mzlg49gdlhb94u8todfeszymz3x84qdsdzvsfyw4d3pnvypo4egpbftdkjv1sg7sy7x70n3gi5zvidybyl68exd3igc6oa0c0ehytogp9209ywzlikk37ho0w4oxv6nuxgju580a1zgd5k2qrhehq041odf2nbe1ndv3erg5zz0kru0hzfn9ba5d29fs0aq3dvk0shmw8hz4wzidrvgzdijm6m969q63ggxa0n38c397tei134t7lsk4zy40s7l897jlvxlrdxdq0w469jn1mw2dghzdzdv6xaesfuhnzhhxp6ne2x9wcy70xdw36osfzozgbb4lht6vaub2t6tjwtwrxgi6cjo1mia6y6xym428d45e1l86y28x9mhbrjuflifrei87un12p8qvxmrkzs8k9awfixcvmszqy8msxe8er9zsviext0bn5ev2tahefxvblclc6me75kq8ui1404xl8ycanf3y7s2uzdutn2qoes0ditjwpxmoas3rcmsw7bkrd75g6secg925d91bjds13emoh1uqolocm0j7bypv6h3jvlulp076k92893h93ip0rkgjxqze8m3nox1imaarliv28hwbkevjuzxazzbmw57hjot3nqfwz9ial6zjdxg6twf3v49cr43dwasma857epddcu7ns7ikrc4lwxhgb0tg6qalha47s15x6kcdm18534hyk1hd0sj7ofgeb9j038a4pxfhxfodfq438rjihhugz4nr4vlif0hm9zaruzu32m3355muz4xpdddi0254jonabtt2k2oqnko08uddxjho6j6nxo6bm8675pvyx7mdav2obd1ymm5heyceefc0wp78txzpypbg4hh6hgzuo5nxrkgp2bygtr337ixy0bi78qpdnzyswqxmxvn8v4d727tkgffeyjkyaz105of6nxsp1vcojqugix86ri7yjim3tnzlm8xvvyy74klwfidq24sfak8veyxme1kg0jq2m8pj69x4l977rjr5hz28pxe4hdibb3wd1sydzxivihgq3niyr7e9etk7ntg10be9yxwae4j2losvd84akzrgojm300f2259my4fieaud7wji92kcr0cy8m2ntabxxje9urpv4e4t3tpn7zrhv2ytt4bym6dzc5v5khvjom7iifp2gdb28biwamzcr96xoqcgjnsmlnfwwio9t7prgh4000rptx93a96jyb1jncz3rytmr9vr46tepb3nr2kugu1fiiyzh8soin1gkwbk6e4upk8
ggx755n9kby89qwajawmkezz39aw16etjn6jho7itnivai3nyvndwabrty6z9fqwbrri0ivthukcnd7p9jemt37is7uvare630dxcox7n4o7wnbyaylt57pvdj2ljezjcj53vpm234f84fri0l4a3k1hx2h34q2hxszujmw2ltvt3ksel7781rf17u2rprm9x8iuswvu4muy8juwwvqs9dodnmr9g1dl6ekusfaqyht8uyvg8i3tsxnwvay4rhsvtv97okhdifpgzrpvxj77y2g2rss6t4s02wqx3165t3qpvjxourdhwf2a4wg151z620x2nha3wxgnjcq9mndah28881myj58jpxexxtd9rxms0hr4hjc02j9yf6lxv5ybdjwek75caf7jc78szmya4pnj588qf8s6lt9qjrgjfzyxrn8yx0qvkoszuwu9n29qz3plwelg8ejbm9hdd3xyn4ra4sw0dycq175hzdffp5djirn5wlbnf3i3gjh5qdcsh3r9mlqv55q39n4n5p4d9e6fiyccpbmwh3hcaoptopizlv6xy82kj9a5v3nf00kzqk194kg61kohqv0jbnq028rgscxt09pixc38im60sv9eh9b0o66sh3gev7ddqv4t8c1ue0j9imoyo81590162yvz11twegnj9j1qe9cy38jleimyctw4dz5nco5ohh2pplti4lt28in2gb6se3esi222l1q5ss3wnf4of5v4ctx9wayejs6ffqc3xvf6jrm3keaxlbo9003coab0x3fh2jqzyogfhpipj49skkvyijtv2szneohvcp98xalvgw9a0y9vm3u4g3uj7ptwlq27npqw4noaxyf55vkznqgz8px5w9qodi82kmhbibctv4op7h7w0txwpc6y4ommdg0yio0sb5nqts63x3rpgkklzh5pjt30tpojvchbjc8qfy20y97r1wkqh2e7k9p2oqs4kco74if4fzqnnwo8iy5m3x8fep484opxuqnyu0e8qfuzwo9h497w7bxva4pi66qhim4fuzc4xmaprn53e0xikucbx0zhcad0tv0n8fivmi0u9np6a3dnshl1cax67yw5n1xq0721179elfdwwnx3c5l35kz7xzho3978bs9f16h5j3gvl9sdpmjzqzjclr6np7ud0mheswutdmcz6bwjj0q4fo6dbwwggcyri9ubjs9zw5ou80e345x7ot5w92jhcqhm2dl5ab8nsrw1gpps18hkhyyq7vkweq2hw7wiuxrdoqojh50abw2f7crnqdmjol7i9ze7xsqe7pdg92dcus77dn976ly0nv73rla0ucdqrhxx1ul934n07qzbpk6ieyrak2v1bo0jggh0bgdv2tqyhzuhfnr9al6tijuq9me4gtxap3lnzamzca5scis0hoau9g4akxy2cuf8t3ove36g5d3ftkz2yz5pwa6gydpy3qakkduj1xf905k4 == \n\0\7\v\d\t\3\7\l\c\p\q\p\r\x\n\r\9\0\r\u\r\g\p\9\5\7\y\8\r\t\x\d\m\q\x\s\6\0\h\a\1\m\o\f\d\v\u\c\g\8\9\b\9\d\z\s\8\b\4\x\y\3\3\4\s\9\i\s\v\m\g\e\m\6\k\h\i\8\w\f\o\9\j\o\m\7\y\x\t\m\w\f\n\4\e\8\9\u\8\n\2\9\w\w\q\f\4\v\b\t\7\9\g\8\y\x\p\u\r\0\r\y\7\c\p\y\a\s\o\q\1\2\n\c\l\v\5\k\g\5\r\c\x\z\6\c\5\j\q\8\3\v\v\u\0\r\l\3\d\6\e\a\g\g\x\8\9\w\t\i\1\f\s\q\w\l\7\2\2\k\0\w\i\o\o\a\e\j\k\v\6\0\j\f\q\u\6\9\h\p\2\m\h\v\v\2\g\g\l\f\4\2\g\e\r\8\8\0\d\5\k\j\9\k\u\o\c\9\q\6\v\z\u\g\v\w\5\a\k\4\y\2\e\0\s\x\z\7\3\9\o\l\l\b\9\c\j\m\f\n\b\m\7\u\h\b\8\e\m\k\a\o\z\p\d\x\s\e\h\x\c\r\t\e\1\v\y\w\u\x\z\e\p\g\j\c\z\0\i\s\f\s\6\c\c\j\1\4\r\7\b\w\4\j\t\5\q\l\c\u\0\q\s\x\m\t\g\x\v\3\t\k\8\u\m\3\o\a\w\s\1\l\a\c\8\p\8\q\2\f\p\w\u\0\h\t\v\2\m\i\q\c\s\e\e\k\t\o\d\h\k\d\k\c\j\0\k\8\g\w\4\h\6\u\t\w\0\2\v\h\h\4\3\p\9\u\x\u\q\f\c\s\e\z\s\f\j\h\7\c\4\d\3\8\4\c\i\3\8\8\8\l\p\y\f\b\a\i\r\a\y\v\9\f\7\7\g\e\j\m\y\2\e\m\m\n\9\y\c\u\t\p\2\c\p\f\a\x\y\v\u\3\n\7\5\5\3\o\x\b\n\2\5\t\b\5\s\w\d\0\5\v\r\x\i\t\l\1\u\x\5\n\j\m\4\2\b\a\d\m\4\s\r\c\o\6\v\b\l\2\j\d\l\k\a\o\l\q\d\e\v\m\8\l\m\b\w\f\r\2\i\e\9\s\m\d\p\w\y\r\t\p\p\r\d\z\f\e\n\7\f\q\v\k\b\s\q\2\g\y\7\c\v\i\v\1\y\l\g\x\g\9\h\1\7\w\q\5\n\2\h\7\y\1\d\j\z\9\9\0\9\k\m\3\h\c\i\v\d\s\w\x\j\l\i\c\6\s\h\z\9\2\t\2\y\s\s\k\f\6\u\p\m\r\s\i\k\k\6\w\v\n\l\8\e\a\j\p\q\r\i\2\7\4\g\p\j\i\e\8\0\1\q\f\5\8\g\w\s\s\e\b\x\8\h\r\o\j\r\t\l\r\z\l\a\y\l\q\q\w\g\d\h\z\e\3\j\e\o\u\4\4\e\j\b\b\4\x\5\o\i\9\l\j\q\e\n\p\m\5\h\g\4\i\k\6\p\r\y\p\p\o\3\d\1\3\w\4\h\g\o\n\u\1\o\1\l\0\0\k\2\a\y\i\6\z\2\w\h\m\e\e\8\0\y\5\9\o\s\0\r\w\z\2\6\b\e\n\i\d\s\d\l\b\h\j\s\z\y\l\i\u\y\b\o\3\7\z\w\o\i\q\4\j\2\p\w\i\m\i\f\x\g\d\s\5\p\k\m\c\p\b\v\f\d\y\t\v\h\a\2\a\3\8\0\e\e\i\m\v\f\j\i\5\f\g\8\o\q\5\c\a\5\8\l\f\q\s\a\l\2\8\m\9\e\a\p\p\6\1\c\i\l\8\8\1\y\c\6\v\f\q\v\a\a\a\4\g\6\n\6\d\h\o\5\i\5\u\k\i\4\l\c\h\q\1\o\f\c\b\m\1\p\w\7\x\r\9\l\f\2\h\z\t\c\e\x\d\m\m\9\c\c\f\a\s\4\7\y\i\m\c\l\5\p\6\u\u\o\h\g\9\v\9\r\v\9\s\h\n\4\s\t\z\f\y\e\9\7\i\e\r\s\r\u\4\m\a\q\0\d\v\p\b\y\n\5\j\9\a\s\d\d\l\r\f\h\x\n\o\f\8\v\1\q\5\j\q\8\3\o\j\w\f\r\1\k\j\z\9\3\w\a\g\7\f\m\c\w\v\m\e\h\m\5\w\a\p\q\2\
o\4\w\3\c\r\u\2\f\r\b\3\h\m\n\r\n\6\3\b\j\v\z\8\o\f\v\n\2\j\a\0\h\t\d\q\4\n\p\c\j\8\c\k\z\j\b\x\s\n\w\5\c\g\s\y\y\6\n\a\p\s\o\z\k\4\q\s\c\k\7\4\y\8\i\v\h\y\r\2\g\0\0\r\w\e\w\e\2\o\u\y\d\e\g\9\6\i\p\g\u\x\9\h\v\7\a\r\i\g\4\9\6\z\h\x\4\a\j\e\f\p\g\u\v\0\4\a\l\1\p\j\y\r\d\3\p\g\k\q\z\f\m\u\2\i\o\k\5\2\z\0\k\r\l\e\f\5\r\v\z\t\e\6\c\c\c\t\1\o\g\f\u\m\e\t\w\i\r\q\0\d\c\r\n\s\r\i\i\5\1\w\6\b\w\8\1\h\7\6\u\1\4\9\y\3\w\0\8\8\k\m\n\k\4\w\5\q\t\z\q\q\2\c\x\j\m\i\2\w\f\h\f\k\o\o\a\h\p\g\a\g\j\o\m\t\r\p\8\p\y\2\u\6\2\w\l\4\o\l\b\x\6\o\u\h\p\e\u\y\6\t\d\s\9\e\y\6\d\x\r\u\t\s\p\x\k\1\o\g\c\5\b\7\m\z\l\g\4\9\g\d\l\h\b\9\4\u\8\t\o\d\f\e\s\z\y\m\z\3\x\8\4\q\d\s\d\z\v\s\f\y\w\4\d\3\p\n\v\y\p\o\4\e\g\p\b\f\t\d\k\j\v\1\s\g\7\s\y\7\x\7\0\n\3\g\i\5\z\v\i\d\y\b\y\l\6\8\e\x\d\3\i\g\c\6\o\a\0\c\0\e\h\y\t\o\g\p\9\2\0\9\y\w\z\l\i\k\k\3\7\h\o\0\w\4\o\x\v\6\n\u\x\g\j\u\5\8\0\a\1\z\g\d\5\k\2\q\r\h\e\h\q\0\4\1\o\d\f\2\n\b\e\1\n\d\v\3\e\r\g\5\z\z\0\k\r\u\0\h\z\f\n\9\b\a\5\d\2\9\f\s\0\a\q\3\d\v\k\0\s\h\m\w\8\h\z\4\w\z\i\d\r\v\g\z\d\i\j\m\6\m\9\6\9\q\6\3\g\g\x\a\0\n\3\8\c\3\9\7\t\e\i\1\3\4\t\7\l\s\k\4\z\y\4\0\s\7\l\8\9\7\j\l\v\x\l\r\d\x\d\q\0\w\4\6\9\j\n\1\m\w\2\d\g\h\z\d\z\d\v\6\x\a\e\s\f\u\h\n\z\h\h\x\p\6\n\e\2\x\9\w\c\y\7\0\x\d\w\3\6\o\s\f\z\o\z\g\b\b\4\l\h\t\6\v\a\u\b\2\t\6\t\j\w\t\w\r\x\g\i\6\c\j\o\1\m\i\a\6\y\6\x\y\m\4\2\8\d\4\5\e\1\l\8\6\y\2\8\x\9\m\h\b\r\j\u\f\l\i\f\r\e\i\8\7\u\n\1\2\p\8\q\v\x\m\r\k\z\s\8\k\9\a\w\f\i\x\c\v\m\s\z\q\y\8\m\s\x\e\8\e\r\9\z\s\v\i\e\x\t\0\b\n\5\e\v\2\t\a\h\e\f\x\v\b\l\c\l\c\6\m\e\7\5\k\q\8\u\i\1\4\0\4\x\l\8\y\c\a\n\f\3\y\7\s\2\u\z\d\u\t\n\2\q\o\e\s\0\d\i\t\j\w\p\x\m\o\a\s\3\r\c\m\s\w\7\b\k\r\d\7\5\g\6\s\e\c\g\9\2\5\d\9\1\b\j\d\s\1\3\e\m\o\h\1\u\q\o\l\o\c\m\0\j\7\b\y\p\v\6\h\3\j\v\l\u\l\p\0\7\6\k\9\2\8\9\3\h\9\3\i\p\0\r\k\g\j\x\q\z\e\8\m\3\n\o\x\1\i\m\a\a\r\l\i\v\2\8\h\w\b\k\e\v\j\u\z\x\a\z\z\b\m\w\5\7\h\j\o\t\3\n\q\f\w\z\9\i\a\l\6\z\j\d\x\g\6\t\w\f\3\v\4\9\c\r\4\3\d\w\a\s\m\a\8\5\7\e\p\d\d\c\u\7\n\s\7\i\k\r\c\4\l\w\x\h\g\b\0\t\g\6\q\a\l\h\a\4\7\s\1\5\x\6\k\c\d\m\1\8\5\3\4\h\y\k\1\h\d\0\s\j\7\o\f\g\e\b\9\j\0\3\8\a\4\p\x\f\h\x\f\o\d\f\q\4\3\8\r\j\i\h\h\u\g\z\4\n\r\4\v\l\i\f\0\h\m\9\z\a\r\u\z\u\3\2\m\3\3\5\5\m\u\z\4\x\p\d\d\d\i\0\2\5\4\j\o\n\a\b\t\t\2\k\2\o\q\n\k\o\0\8\u\d\d\x\j\h\o\6\j\6\n\x\o\6\b\m\8\6\7\5\p\v\y\x\7\m\d\a\v\2\o\b\d\1\y\m\m\5\h\e\y\c\e\e\f\c\0\w\p\7\8\t\x\z\p\y\p\b\g\4\h\h\6\h\g\z\u\o\5\n\x\r\k\g\p\2\b\y\g\t\r\3\3\7\i\x\y\0\b\i\7\8\q\p\d\n\z\y\s\w\q\x\m\x\v\n\8\v\4\d\7\2\7\t\k\g\f\f\e\y\j\k\y\a\z\1\0\5\o\f\6\n\x\s\p\1\v\c\o\j\q\u\g\i\x\8\6\r\i\7\y\j\i\m\3\t\n\z\l\m\8\x\v\v\y\y\7\4\k\l\w\f\i\d\q\2\4\s\f\a\k\8\v\e\y\x\m\e\1\k\g\0\j\q\2\m\8\p\j\6\9\x\4\l\9\7\7\r\j\r\5\h\z\2\8\p\x\e\4\h\d\i\b\b\3\w\d\1\s\y\d\z\x\i\v\i\h\g\q\3\n\i\y\r\7\e\9\e\t\k\7\n\t\g\1\0\b\e\9\y\x\w\a\e\4\j\2\l\o\s\v\d\8\4\a\k\z\r\g\o\j\m\3\0\0\f\2\2\5\9\m\y\4\f\i\e\a\u\d\7\w\j\i\9\2\k\c\r\0\c\y\8\m\2\n\t\a\b\x\x\j\e\9\u\r\p\v\4\e\4\t\3\t\p\n\7\z\r\h\v\2\y\t\t\4\b\y\m\6\d\z\c\5\v\5\k\h\v\j\o\m\7\i\i\f\p\2\g\d\b\2\8\b\i\w\a\m\z\c\r\9\6\x\o\q\c\g\j\n\s\m\l\n\f\w\w\i\o\9\t\7\p\r\g\h\4\0\0\0\r\p\t\x\9\3\a\9\6\j\y\b\1\j\n\c\z\3\r\y\t\m\r\9\v\r\4\6\t\e\p\b\3\n\r\2\k\u\g\u\1\f\i\i\y\z\h\8\s\o\i\n\1\g\k\w\b\k\6\e\4\u\p\k\8\g\g\x\7\5\5\n\9\k\b\y\8\9\q\w\a\j\a\w\m\k\e\z\z\3\9\a\w\1\6\e\t\j\n\6\j\h\o\7\i\t\n\i\v\a\i\3\n\y\v\n\d\w\a\b\r\t\y\6\z\9\f\q\w\b\r\r\i\0\i\v\t\h\u\k\c\n\d\7\p\9\j\e\m\t\3\7\i\s\7\u\v\a\r\e\6\3\0\d\x\c\o\x\7\n\4\o\7\w\n\b\y\a\y\l\t\5\7\p\v\d\j\2\l\j\e\z\j\c\j\5\3\v\p\m\2\3\4\f\8\4\f\r\i\0\l\4\a\3\k\1\h\x\2\h\3\4\q\2\h\x\s\z\u\j\m\w\2\l\t\v\t\3\k\s\e\l\7\7\8\1\r\f\1\7\u\2\r\p\r\m\9\x\8\i\u\s\w\v\u\4
\m\u\y\8\j\u\w\w\v\q\s\9\d\o\d\n\m\r\9\g\1\d\l\6\e\k\u\s\f\a\q\y\h\t\8\u\y\v\g\8\i\3\t\s\x\n\w\v\a\y\4\r\h\s\v\t\v\9\7\o\k\h\d\i\f\p\g\z\r\p\v\x\j\7\7\y\2\g\2\r\s\s\6\t\4\s\0\2\w\q\x\3\1\6\5\t\3\q\p\v\j\x\o\u\r\d\h\w\f\2\a\4\w\g\1\5\1\z\6\2\0\x\2\n\h\a\3\w\x\g\n\j\c\q\9\m\n\d\a\h\2\8\8\8\1\m\y\j\5\8\j\p\x\e\x\x\t\d\9\r\x\m\s\0\h\r\4\h\j\c\0\2\j\9\y\f\6\l\x\v\5\y\b\d\j\w\e\k\7\5\c\a\f\7\j\c\7\8\s\z\m\y\a\4\p\n\j\5\8\8\q\f\8\s\6\l\t\9\q\j\r\g\j\f\z\y\x\r\n\8\y\x\0\q\v\k\o\s\z\u\w\u\9\n\2\9\q\z\3\p\l\w\e\l\g\8\e\j\b\m\9\h\d\d\3\x\y\n\4\r\a\4\s\w\0\d\y\c\q\1\7\5\h\z\d\f\f\p\5\d\j\i\r\n\5\w\l\b\n\f\3\i\3\g\j\h\5\q\d\c\s\h\3\r\9\m\l\q\v\5\5\q\3\9\n\4\n\5\p\4\d\9\e\6\f\i\y\c\c\p\b\m\w\h\3\h\c\a\o\p\t\o\p\i\z\l\v\6\x\y\8\2\k\j\9\a\5\v\3\n\f\0\0\k\z\q\k\1\9\4\k\g\6\1\k\o\h\q\v\0\j\b\n\q\0\2\8\r\g\s\c\x\t\0\9\p\i\x\c\3\8\i\m\6\0\s\v\9\e\h\9\b\0\o\6\6\s\h\3\g\e\v\7\d\d\q\v\4\t\8\c\1\u\e\0\j\9\i\m\o\y\o\8\1\5\9\0\1\6\2\y\v\z\1\1\t\w\e\g\n\j\9\j\1\q\e\9\c\y\3\8\j\l\e\i\m\y\c\t\w\4\d\z\5\n\c\o\5\o\h\h\2\p\p\l\t\i\4\l\t\2\8\i\n\2\g\b\6\s\e\3\e\s\i\2\2\2\l\1\q\5\s\s\3\w\n\f\4\o\f\5\v\4\c\t\x\9\w\a\y\e\j\s\6\f\f\q\c\3\x\v\f\6\j\r\m\3\k\e\a\x\l\b\o\9\0\0\3\c\o\a\b\0\x\3\f\h\2\j\q\z\y\o\g\f\h\p\i\p\j\4\9\s\k\k\v\y\i\j\t\v\2\s\z\n\e\o\h\v\c\p\9\8\x\a\l\v\g\w\9\a\0\y\9\v\m\3\u\4\g\3\u\j\7\p\t\w\l\q\2\7\n\p\q\w\4\n\o\a\x\y\f\5\5\v\k\z\n\q\g\z\8\p\x\5\w\9\q\o\d\i\8\2\k\m\h\b\i\b\c\t\v\4\o\p\7\h\7\w\0\t\x\w\p\c\6\y\4\o\m\m\d\g\0\y\i\o\0\s\b\5\n\q\t\s\6\3\x\3\r\p\g\k\k\l\z\h\5\p\j\t\3\0\t\p\o\j\v\c\h\b\j\c\8\q\f\y\2\0\y\9\7\r\1\w\k\q\h\2\e\7\k\9\p\2\o\q\s\4\k\c\o\7\4\i\f\4\f\z\q\n\n\w\o\8\i\y\5\m\3\x\8\f\e\p\4\8\4\o\p\x\u\q\n\y\u\0\e\8\q\f\u\z\w\o\9\h\4\9\7\w\7\b\x\v\a\4\p\i\6\6\q\h\i\m\4\f\u\z\c\4\x\m\a\p\r\n\5\3\e\0\x\i\k\u\c\b\x\0\z\h\c\a\d\0\t\v\0\n\8\f\i\v\m\i\0\u\9\n\p\6\a\3\d\n\s\h\l\1\c\a\x\6\7\y\w\5\n\1\x\q\0\7\2\1\1\7\9\e\l\f\d\w\w\n\x\3\c\5\l\3\5\k\z\7\x\z\h\o\3\9\7\8\b\s\9\f\1\6\h\5\j\3\g\v\l\9\s\d\p\m\j\z\q\z\j\c\l\r\6\n\p\7\u\d\0\m\h\e\s\w\u\t\d\m\c\z\6\b\w\j\j\0\q\4\f\o\6\d\b\w\w\g\g\c\y\r\i\9\u\b\j\s\9\z\w\5\o\u\8\0\e\3\4\5\x\7\o\t\5\w\9\2\j\h\c\q\h\m\2\d\l\5\a\b\8\n\s\r\w\1\g\p\p\s\1\8\h\k\h\y\y\q\7\v\k\w\e\q\2\h\w\7\w\i\u\x\r\d\o\q\o\j\h\5\0\a\b\w\2\f\7\c\r\n\q\d\m\j\o\l\7\i\9\z\e\7\x\s\q\e\7\p\d\g\9\2\d\c\u\s\7\7\d\n\9\7\6\l\y\0\n\v\7\3\r\l\a\0\u\c\d\q\r\h\x\x\1\u\l\9\3\4\n\0\7\q\z\b\p\k\6\i\e\y\r\a\k\2\v\1\b\o\0\j\g\g\h\0\b\g\d\v\2\t\q\y\h\z\u\h\f\n\r\9\a\l\6\t\i\j\u\q\9\m\e\4\g\t\x\a\p\3\l\n\z\a\m\z\c\a\5\s\c\i\s\0\h\o\a\u\9\g\4\a\k\x\y\2\c\u\f\8\t\3\o\v\e\3\6\g\5\d\3\f\t\k\z\2\y\z\5\p\w\a\6\g\y\d\p\y\3\q\a\k\k\d\u\j\1\x\f\9\0\5\k\4 ]] 00:05:57.190 ************************************ 00:05:57.190 END TEST dd_rw_offset 00:05:57.190 00:05:57.190 real 0m1.337s 00:05:57.190 user 0m0.891s 00:05:57.190 sys 0m0.635s 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:57.190 ************************************ 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.190 16:51:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.190 { 00:05:57.190 "subsystems": [ 00:05:57.190 { 00:05:57.190 "subsystem": "bdev", 00:05:57.190 "config": [ 00:05:57.190 { 00:05:57.190 "params": { 00:05:57.190 "trtype": "pcie", 00:05:57.190 "traddr": "0000:00:10.0", 00:05:57.190 "name": "Nvme0" 00:05:57.190 }, 00:05:57.190 "method": "bdev_nvme_attach_controller" 00:05:57.190 }, 00:05:57.190 { 00:05:57.190 "method": "bdev_wait_for_examine" 00:05:57.190 } 00:05:57.190 ] 00:05:57.190 } 00:05:57.190 ] 00:05:57.190 } 00:05:57.190 [2024-11-20 16:51:15.580414] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:57.190 [2024-11-20 16:51:15.580532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:05:57.450 [2024-11-20 16:51:15.723935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.450 [2024-11-20 16:51:15.781706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.450 [2024-11-20 16:51:15.837473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.450  [2024-11-20T16:51:16.246Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:57.710 00:05:57.710 16:51:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.710 ************************************ 00:05:57.710 END TEST spdk_dd_basic_rw 00:05:57.710 ************************************ 00:05:57.710 00:05:57.710 real 0m17.981s 00:05:57.710 user 0m12.712s 00:05:57.710 sys 0m7.009s 00:05:57.710 16:51:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.710 16:51:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.710 16:51:16 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:57.710 16:51:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.710 16:51:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.710 16:51:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:57.710 ************************************ 00:05:57.710 START TEST spdk_dd_posix 00:05:57.710 ************************************ 00:05:57.710 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:57.969 * Looking for test storage... 
00:05:57.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.969 --rc genhtml_branch_coverage=1 00:05:57.969 --rc genhtml_function_coverage=1 00:05:57.969 --rc genhtml_legend=1 00:05:57.969 --rc geninfo_all_blocks=1 00:05:57.969 --rc geninfo_unexecuted_blocks=1 00:05:57.969 00:05:57.969 ' 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.969 --rc genhtml_branch_coverage=1 00:05:57.969 --rc genhtml_function_coverage=1 00:05:57.969 --rc genhtml_legend=1 00:05:57.969 --rc geninfo_all_blocks=1 00:05:57.969 --rc geninfo_unexecuted_blocks=1 00:05:57.969 00:05:57.969 ' 00:05:57.969 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.970 --rc genhtml_branch_coverage=1 00:05:57.970 --rc genhtml_function_coverage=1 00:05:57.970 --rc genhtml_legend=1 00:05:57.970 --rc geninfo_all_blocks=1 00:05:57.970 --rc geninfo_unexecuted_blocks=1 00:05:57.970 00:05:57.970 ' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.970 --rc genhtml_branch_coverage=1 00:05:57.970 --rc genhtml_function_coverage=1 00:05:57.970 --rc genhtml_legend=1 00:05:57.970 --rc geninfo_all_blocks=1 00:05:57.970 --rc geninfo_unexecuted_blocks=1 00:05:57.970 00:05:57.970 ' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:57.970 * First test run, liburing in use 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:57.970 ************************************ 00:05:57.970 START TEST dd_flag_append 00:05:57.970 ************************************ 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=a2o1ysb3bf2p9jgplto5quwl5qwnq6bn 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=771m77xoj7pgwiti7dbfmg0cx9zu4frf 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s a2o1ysb3bf2p9jgplto5quwl5qwnq6bn 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 771m77xoj7pgwiti7dbfmg0cx9zu4frf 00:05:57.970 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:57.970 [2024-11-20 16:51:16.463488] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
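The dd_flag_append run traced here writes one 32-byte random string into dd.dump0 and another into dd.dump1, copies dump0 onto dump1 with --oflag=append, and then checks that dump1 holds the second string immediately followed by the first. A minimal stand-alone sketch of the same check, using GNU coreutils dd rather than the spdk_dd binary under test (file names are illustrative; the two strings are reused from the log above):

    # append sketch with coreutils dd (not spdk_dd); values copied from the log above
    a=a2o1ysb3bf2p9jgplto5quwl5qwnq6bn
    b=771m77xoj7pgwiti7dbfmg0cx9zu4frf
    printf %s "$a" > dump0
    printf %s "$b" > dump1
    # oflag=append opens the output with O_APPEND; conv=notrunc keeps its existing bytes
    dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
    [[ $(cat dump1) == "${b}${a}" ]] && echo 'append flag verified'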
00:05:57.970 [2024-11-20 16:51:16.463626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60118 ] 00:05:58.228 [2024-11-20 16:51:16.612586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.228 [2024-11-20 16:51:16.676073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.228 [2024-11-20 16:51:16.731473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.486  [2024-11-20T16:51:17.022Z] Copying: 32/32 [B] (average 31 kBps) 00:05:58.486 00:05:58.486 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 771m77xoj7pgwiti7dbfmg0cx9zu4frfa2o1ysb3bf2p9jgplto5quwl5qwnq6bn == \7\7\1\m\7\7\x\o\j\7\p\g\w\i\t\i\7\d\b\f\m\g\0\c\x\9\z\u\4\f\r\f\a\2\o\1\y\s\b\3\b\f\2\p\9\j\g\p\l\t\o\5\q\u\w\l\5\q\w\n\q\6\b\n ]] 00:05:58.486 00:05:58.486 real 0m0.566s 00:05:58.486 user 0m0.297s 00:05:58.486 sys 0m0.295s 00:05:58.486 ************************************ 00:05:58.486 END TEST dd_flag_append 00:05:58.486 ************************************ 00:05:58.486 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.486 16:51:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:58.486 16:51:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:58.486 16:51:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.486 16:51:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.486 16:51:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:58.486 ************************************ 00:05:58.486 START TEST dd_flag_directory 00:05:58.486 ************************************ 00:05:58.486 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:58.487 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.745 [2024-11-20 16:51:17.064590] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:58.745 [2024-11-20 16:51:17.064701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:05:58.745 [2024-11-20 16:51:17.212597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.005 [2024-11-20 16:51:17.282310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.005 [2024-11-20 16:51:17.341525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.005 [2024-11-20 16:51:17.383281] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.005 [2024-11-20 16:51:17.383360] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.005 [2024-11-20 16:51:17.383395] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.005 [2024-11-20 16:51:17.509420] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.265 16:51:17 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:59.265 16:51:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:59.265 [2024-11-20 16:51:17.653931] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:05:59.265 [2024-11-20 16:51:17.654075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:05:59.524 [2024-11-20 16:51:17.804608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.524 [2024-11-20 16:51:17.867337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.524 [2024-11-20 16:51:17.922787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.524 [2024-11-20 16:51:17.961348] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.524 [2024-11-20 16:51:17.961411] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:59.524 [2024-11-20 16:51:17.961447] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.783 [2024-11-20 16:51:18.080866] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.783 00:05:59.783 real 0m1.133s 00:05:59.783 user 0m0.635s 00:05:59.783 sys 0m0.287s 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.783 ************************************ 00:05:59.783 END TEST dd_flag_directory 00:05:59.783 ************************************ 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:59.783 16:51:18 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:59.783 ************************************ 00:05:59.783 START TEST dd_flag_nofollow 00:05:59.783 ************************************ 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:59.783 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.783 [2024-11-20 16:51:18.270357] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
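At this point dd_flag_nofollow has sym-linked dd.dump0.link and dd.dump1.link onto the dump files and is invoking spdk_dd with --iflag=nofollow on the link, expecting the open to be refused with "Too many levels of symbolic links" (ELOOP), as the error lines below show. A rough analogue of that O_NOFOLLOW behaviour with coreutils dd (names illustrative, not the tool under test):

    # nofollow sketch: an O_NOFOLLOW open of a symlink must fail with ELOOP
    printf 'payload' > dump0
    ln -fs dump0 dump0.link
    if ! dd if=dump0.link of=/dev/null iflag=nofollow status=none 2>/dev/null; then
        echo 'nofollow refused the symlink, as expected'
    fi
    # without the flag the same copy follows the link and succeeds
    dd if=dump0.link of=/dev/null status=none && echo 'plain open follows the link'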
00:05:59.783 [2024-11-20 16:51:18.270458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:06:00.043 [2024-11-20 16:51:18.417425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.043 [2024-11-20 16:51:18.470451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.043 [2024-11-20 16:51:18.529674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.043 [2024-11-20 16:51:18.567941] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:00.043 [2024-11-20 16:51:18.568000] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:00.043 [2024-11-20 16:51:18.568021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.301 [2024-11-20 16:51:18.689613] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.301 16:51:18 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:00.301 16:51:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:00.301 [2024-11-20 16:51:18.828736] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:00.301 [2024-11-20 16:51:18.828865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:06:00.560 [2024-11-20 16:51:18.973947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.561 [2024-11-20 16:51:19.035991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.561 [2024-11-20 16:51:19.092321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.830 [2024-11-20 16:51:19.130278] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:00.830 [2024-11-20 16:51:19.130351] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:00.830 [2024-11-20 16:51:19.130387] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.830 [2024-11-20 16:51:19.253818] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:00.830 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.090 [2024-11-20 16:51:19.392263] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:01.090 [2024-11-20 16:51:19.392384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60208 ] 00:06:01.090 [2024-11-20 16:51:19.539051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.090 [2024-11-20 16:51:19.591404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.349 [2024-11-20 16:51:19.646599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.349  [2024-11-20T16:51:20.144Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.608 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ n4qzmrxb844kma8uhnyhv4nh0r1rwxd1gthwqjq8b2qm7zozed9c95pq2xxbyaxhm0j7epq22rynjqiecu4jhhmu7426ie74h6ob26a5a4er3ljli5na4zkub4if932uqlgz49b2qr6j6k30ae3ve8r3n8pznw1vuko4c10f0vnodg3xdtpjmnmcmm5giys0epu87scnzm4fj6xnpu23ydgo806yybcj88klocdyhz0ytobtwxiq36n6nacf5iuxanue6igev2hjr95hgybdlr0vo1g590v4c6qvme96xi74pbq41lt5kprtbn6zcdaay46nmh7zme8z6mdqzuoh5e7ngg8hsirl1mm450rzvzx682o28wyhrkj2it8zs2pzs2b1sa00gsmnc0vh06kcxu73zf3jdhcd059x3npmxpa4zmuvqaumzbis613smhvmmoekl0vfdtxkang0voeoie1tc8j2url0uf0mjn81anyz7pqcvl04azkxg6jj0god == \n\4\q\z\m\r\x\b\8\4\4\k\m\a\8\u\h\n\y\h\v\4\n\h\0\r\1\r\w\x\d\1\g\t\h\w\q\j\q\8\b\2\q\m\7\z\o\z\e\d\9\c\9\5\p\q\2\x\x\b\y\a\x\h\m\0\j\7\e\p\q\2\2\r\y\n\j\q\i\e\c\u\4\j\h\h\m\u\7\4\2\6\i\e\7\4\h\6\o\b\2\6\a\5\a\4\e\r\3\l\j\l\i\5\n\a\4\z\k\u\b\4\i\f\9\3\2\u\q\l\g\z\4\9\b\2\q\r\6\j\6\k\3\0\a\e\3\v\e\8\r\3\n\8\p\z\n\w\1\v\u\k\o\4\c\1\0\f\0\v\n\o\d\g\3\x\d\t\p\j\m\n\m\c\m\m\5\g\i\y\s\0\e\p\u\8\7\s\c\n\z\m\4\f\j\6\x\n\p\u\2\3\y\d\g\o\8\0\6\y\y\b\c\j\8\8\k\l\o\c\d\y\h\z\0\y\t\o\b\t\w\x\i\q\3\6\n\6\n\a\c\f\5\i\u\x\a\n\u\e\6\i\g\e\v\2\h\j\r\9\5\h\g\y\b\d\l\r\0\v\o\1\g\5\9\0\v\4\c\6\q\v\m\e\9\6\x\i\7\4\p\b\q\4\1\l\t\5\k\p\r\t\b\n\6\z\c\d\a\a\y\4\6\n\m\h\7\z\m\e\8\z\6\m\d\q\z\u\o\h\5\e\7\n\g\g\8\h\s\i\r\l\1\m\m\4\5\0\r\z\v\z\x\6\8\2\o\2\8\w\y\h\r\k\j\2\i\t\8\z\s\2\p\z\s\2\b\1\s\a\0\0\g\s\m\n\c\0\v\h\0\6\k\c\x\u\7\3\z\f\3\j\d\h\c\d\0\5\9\x\3\n\p\m\x\p\a\4\z\m\u\v\q\a\u\m\z\b\i\s\6\1\3\s\m\h\v\m\m\o\e\k\l\0\v\f\d\t\x\k\a\n\g\0\v\o\e\o\i\e\1\t\c\8\j\2\u\r\l\0\u\f\0\m\j\n\8\1\a\n\y\z\7\p\q\c\v\l\0\4\a\z\k\x\g\6\j\j\0\g\o\d ]] 00:06:01.608 00:06:01.608 real 0m1.698s 00:06:01.608 user 0m0.937s 00:06:01.608 sys 0m0.584s 00:06:01.608 ************************************ 00:06:01.608 END TEST dd_flag_nofollow 00:06:01.608 ************************************ 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 ************************************ 00:06:01.608 START TEST dd_flag_noatime 00:06:01.608 ************************************ 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:01.608 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732121479 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732121479 00:06:01.609 16:51:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:02.545 16:51:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.545 [2024-11-20 16:51:21.025769] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:02.545 [2024-11-20 16:51:21.025916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60256 ] 00:06:02.805 [2024-11-20 16:51:21.175237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.805 [2024-11-20 16:51:21.239888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.805 [2024-11-20 16:51:21.300161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.064  [2024-11-20T16:51:21.600Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.064 00:06:03.064 16:51:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.064 16:51:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732121479 )) 00:06:03.064 16:51:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.064 16:51:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732121479 )) 00:06:03.064 16:51:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.323 [2024-11-20 16:51:21.605279] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
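dd_flag_noatime stats the atime of dd.dump0 (1732121479 above), copies the file with --iflag=noatime, and asserts the atime did not move; a later copy without the flag is expected to let it advance. A small hedged analogue with coreutils dd and stat; note that O_NOATIME is only honoured if you own the file (or hold CAP_FOWNER), and mount options such as noatime/relatime change what the unflagged case would do:

    # noatime sketch: a read with iflag=noatime should leave the access time alone
    printf 'payload' > dump0
    before=$(stat --printf=%X dump0)
    dd if=dump0 of=/dev/null iflag=noatime status=none   # needs file ownership or CAP_FOWNER
    after=$(stat --printf=%X dump0)
    (( before == after )) && echo 'atime untouched by the noatime read'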
00:06:03.323 [2024-11-20 16:51:21.605408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:06:03.323 [2024-11-20 16:51:21.753970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.323 [2024-11-20 16:51:21.809325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.583 [2024-11-20 16:51:21.865072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.583  [2024-11-20T16:51:22.119Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.583 00:06:03.583 16:51:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.583 16:51:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732121481 )) 00:06:03.583 00:06:03.583 real 0m2.143s 00:06:03.583 user 0m0.619s 00:06:03.583 sys 0m0.587s 00:06:03.583 16:51:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.583 ************************************ 00:06:03.583 END TEST dd_flag_noatime 00:06:03.583 ************************************ 00:06:03.583 16:51:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 ************************************ 00:06:03.842 START TEST dd_flags_misc 00:06:03.842 ************************************ 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.842 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:03.842 [2024-11-20 16:51:22.224422] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
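The first dd_flags_misc pass copies the 512-byte dd.dump0 payload with --iflag=direct --oflag=direct. O_DIRECT bypasses the page cache, so transfer sizes (and on most filesystems the offsets and buffers) must be aligned to the logical block size; the 512-byte payload used by the test meets that. A hedged coreutils-dd illustration, which may fail on filesystems such as tmpfs that reject O_DIRECT:

    # direct I/O sketch: unbuffered 512-byte copy; sizes must stay block-aligned
    head -c 512 /dev/urandom > dump0
    dd if=dump0 of=dump1 bs=512 iflag=direct oflag=direct status=none
    cmp -s dump0 dump1 && echo 'direct copy verified'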
00:06:03.842 [2024-11-20 16:51:22.224530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:06:03.842 [2024-11-20 16:51:22.375105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.101 [2024-11-20 16:51:22.429869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.101 [2024-11-20 16:51:22.483759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.101  [2024-11-20T16:51:22.896Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.360 00:06:04.360 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zuxe89apcrrnger8qg4mxkbmg06a3ni404mnnk9309u4zz31qwd9w1sb747armpi9rqf0xq4xiem5fgfh2822o56m1p4401noz7x7zwn7l8i8ninc2fhwan9dij9bureg7ho4pvxscxvkijle4g2crofr86pzhp04c3bqjphcwr4fqpzurbqrdixyhf1buadgea0mjfcsv7tc3n6u32y03myv1imscqmobdkidafxx2s6qe9plj8udjfb4tgzov6qow22s6952djkms287qjyrp66reksrw605rz4ozme9cigwcjad9tw1e7mv3nnl9nl41wwf35g04sixa45u90xrt4ipx0xr1fu8k9yefv4hj5d5ruim1z3ir5654c2vv36zedx3ve3i1i1tpwofsebi8ubcqzav5nbkuwoem85b9mmenxf80p44z98hi1rw62d7xxcpliafbspahc9j4q8q4jdobot43sk602tjujlzo5o33oz9x7fvyyshniqmmo == \z\u\x\e\8\9\a\p\c\r\r\n\g\e\r\8\q\g\4\m\x\k\b\m\g\0\6\a\3\n\i\4\0\4\m\n\n\k\9\3\0\9\u\4\z\z\3\1\q\w\d\9\w\1\s\b\7\4\7\a\r\m\p\i\9\r\q\f\0\x\q\4\x\i\e\m\5\f\g\f\h\2\8\2\2\o\5\6\m\1\p\4\4\0\1\n\o\z\7\x\7\z\w\n\7\l\8\i\8\n\i\n\c\2\f\h\w\a\n\9\d\i\j\9\b\u\r\e\g\7\h\o\4\p\v\x\s\c\x\v\k\i\j\l\e\4\g\2\c\r\o\f\r\8\6\p\z\h\p\0\4\c\3\b\q\j\p\h\c\w\r\4\f\q\p\z\u\r\b\q\r\d\i\x\y\h\f\1\b\u\a\d\g\e\a\0\m\j\f\c\s\v\7\t\c\3\n\6\u\3\2\y\0\3\m\y\v\1\i\m\s\c\q\m\o\b\d\k\i\d\a\f\x\x\2\s\6\q\e\9\p\l\j\8\u\d\j\f\b\4\t\g\z\o\v\6\q\o\w\2\2\s\6\9\5\2\d\j\k\m\s\2\8\7\q\j\y\r\p\6\6\r\e\k\s\r\w\6\0\5\r\z\4\o\z\m\e\9\c\i\g\w\c\j\a\d\9\t\w\1\e\7\m\v\3\n\n\l\9\n\l\4\1\w\w\f\3\5\g\0\4\s\i\x\a\4\5\u\9\0\x\r\t\4\i\p\x\0\x\r\1\f\u\8\k\9\y\e\f\v\4\h\j\5\d\5\r\u\i\m\1\z\3\i\r\5\6\5\4\c\2\v\v\3\6\z\e\d\x\3\v\e\3\i\1\i\1\t\p\w\o\f\s\e\b\i\8\u\b\c\q\z\a\v\5\n\b\k\u\w\o\e\m\8\5\b\9\m\m\e\n\x\f\8\0\p\4\4\z\9\8\h\i\1\r\w\6\2\d\7\x\x\c\p\l\i\a\f\b\s\p\a\h\c\9\j\4\q\8\q\4\j\d\o\b\o\t\4\3\s\k\6\0\2\t\j\u\j\l\z\o\5\o\3\3\o\z\9\x\7\f\v\y\y\s\h\n\i\q\m\m\o ]] 00:06:04.360 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.360 16:51:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:04.360 [2024-11-20 16:51:22.771339] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:04.360 [2024-11-20 16:51:22.771417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:06:04.619 [2024-11-20 16:51:22.914683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.619 [2024-11-20 16:51:22.974808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.619 [2024-11-20 16:51:23.029598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.619  [2024-11-20T16:51:23.414Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.878 00:06:04.878 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zuxe89apcrrnger8qg4mxkbmg06a3ni404mnnk9309u4zz31qwd9w1sb747armpi9rqf0xq4xiem5fgfh2822o56m1p4401noz7x7zwn7l8i8ninc2fhwan9dij9bureg7ho4pvxscxvkijle4g2crofr86pzhp04c3bqjphcwr4fqpzurbqrdixyhf1buadgea0mjfcsv7tc3n6u32y03myv1imscqmobdkidafxx2s6qe9plj8udjfb4tgzov6qow22s6952djkms287qjyrp66reksrw605rz4ozme9cigwcjad9tw1e7mv3nnl9nl41wwf35g04sixa45u90xrt4ipx0xr1fu8k9yefv4hj5d5ruim1z3ir5654c2vv36zedx3ve3i1i1tpwofsebi8ubcqzav5nbkuwoem85b9mmenxf80p44z98hi1rw62d7xxcpliafbspahc9j4q8q4jdobot43sk602tjujlzo5o33oz9x7fvyyshniqmmo == \z\u\x\e\8\9\a\p\c\r\r\n\g\e\r\8\q\g\4\m\x\k\b\m\g\0\6\a\3\n\i\4\0\4\m\n\n\k\9\3\0\9\u\4\z\z\3\1\q\w\d\9\w\1\s\b\7\4\7\a\r\m\p\i\9\r\q\f\0\x\q\4\x\i\e\m\5\f\g\f\h\2\8\2\2\o\5\6\m\1\p\4\4\0\1\n\o\z\7\x\7\z\w\n\7\l\8\i\8\n\i\n\c\2\f\h\w\a\n\9\d\i\j\9\b\u\r\e\g\7\h\o\4\p\v\x\s\c\x\v\k\i\j\l\e\4\g\2\c\r\o\f\r\8\6\p\z\h\p\0\4\c\3\b\q\j\p\h\c\w\r\4\f\q\p\z\u\r\b\q\r\d\i\x\y\h\f\1\b\u\a\d\g\e\a\0\m\j\f\c\s\v\7\t\c\3\n\6\u\3\2\y\0\3\m\y\v\1\i\m\s\c\q\m\o\b\d\k\i\d\a\f\x\x\2\s\6\q\e\9\p\l\j\8\u\d\j\f\b\4\t\g\z\o\v\6\q\o\w\2\2\s\6\9\5\2\d\j\k\m\s\2\8\7\q\j\y\r\p\6\6\r\e\k\s\r\w\6\0\5\r\z\4\o\z\m\e\9\c\i\g\w\c\j\a\d\9\t\w\1\e\7\m\v\3\n\n\l\9\n\l\4\1\w\w\f\3\5\g\0\4\s\i\x\a\4\5\u\9\0\x\r\t\4\i\p\x\0\x\r\1\f\u\8\k\9\y\e\f\v\4\h\j\5\d\5\r\u\i\m\1\z\3\i\r\5\6\5\4\c\2\v\v\3\6\z\e\d\x\3\v\e\3\i\1\i\1\t\p\w\o\f\s\e\b\i\8\u\b\c\q\z\a\v\5\n\b\k\u\w\o\e\m\8\5\b\9\m\m\e\n\x\f\8\0\p\4\4\z\9\8\h\i\1\r\w\6\2\d\7\x\x\c\p\l\i\a\f\b\s\p\a\h\c\9\j\4\q\8\q\4\j\d\o\b\o\t\4\3\s\k\6\0\2\t\j\u\j\l\z\o\5\o\3\3\o\z\9\x\7\f\v\y\y\s\h\n\i\q\m\m\o ]] 00:06:04.878 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.878 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:04.878 [2024-11-20 16:51:23.313261] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:04.878 [2024-11-20 16:51:23.313384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60319 ] 00:06:05.139 [2024-11-20 16:51:23.459364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.139 [2024-11-20 16:51:23.513937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.139 [2024-11-20 16:51:23.567357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.139  [2024-11-20T16:51:23.936Z] Copying: 512/512 [B] (average 83 kBps) 00:06:05.400 00:06:05.400 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zuxe89apcrrnger8qg4mxkbmg06a3ni404mnnk9309u4zz31qwd9w1sb747armpi9rqf0xq4xiem5fgfh2822o56m1p4401noz7x7zwn7l8i8ninc2fhwan9dij9bureg7ho4pvxscxvkijle4g2crofr86pzhp04c3bqjphcwr4fqpzurbqrdixyhf1buadgea0mjfcsv7tc3n6u32y03myv1imscqmobdkidafxx2s6qe9plj8udjfb4tgzov6qow22s6952djkms287qjyrp66reksrw605rz4ozme9cigwcjad9tw1e7mv3nnl9nl41wwf35g04sixa45u90xrt4ipx0xr1fu8k9yefv4hj5d5ruim1z3ir5654c2vv36zedx3ve3i1i1tpwofsebi8ubcqzav5nbkuwoem85b9mmenxf80p44z98hi1rw62d7xxcpliafbspahc9j4q8q4jdobot43sk602tjujlzo5o33oz9x7fvyyshniqmmo == \z\u\x\e\8\9\a\p\c\r\r\n\g\e\r\8\q\g\4\m\x\k\b\m\g\0\6\a\3\n\i\4\0\4\m\n\n\k\9\3\0\9\u\4\z\z\3\1\q\w\d\9\w\1\s\b\7\4\7\a\r\m\p\i\9\r\q\f\0\x\q\4\x\i\e\m\5\f\g\f\h\2\8\2\2\o\5\6\m\1\p\4\4\0\1\n\o\z\7\x\7\z\w\n\7\l\8\i\8\n\i\n\c\2\f\h\w\a\n\9\d\i\j\9\b\u\r\e\g\7\h\o\4\p\v\x\s\c\x\v\k\i\j\l\e\4\g\2\c\r\o\f\r\8\6\p\z\h\p\0\4\c\3\b\q\j\p\h\c\w\r\4\f\q\p\z\u\r\b\q\r\d\i\x\y\h\f\1\b\u\a\d\g\e\a\0\m\j\f\c\s\v\7\t\c\3\n\6\u\3\2\y\0\3\m\y\v\1\i\m\s\c\q\m\o\b\d\k\i\d\a\f\x\x\2\s\6\q\e\9\p\l\j\8\u\d\j\f\b\4\t\g\z\o\v\6\q\o\w\2\2\s\6\9\5\2\d\j\k\m\s\2\8\7\q\j\y\r\p\6\6\r\e\k\s\r\w\6\0\5\r\z\4\o\z\m\e\9\c\i\g\w\c\j\a\d\9\t\w\1\e\7\m\v\3\n\n\l\9\n\l\4\1\w\w\f\3\5\g\0\4\s\i\x\a\4\5\u\9\0\x\r\t\4\i\p\x\0\x\r\1\f\u\8\k\9\y\e\f\v\4\h\j\5\d\5\r\u\i\m\1\z\3\i\r\5\6\5\4\c\2\v\v\3\6\z\e\d\x\3\v\e\3\i\1\i\1\t\p\w\o\f\s\e\b\i\8\u\b\c\q\z\a\v\5\n\b\k\u\w\o\e\m\8\5\b\9\m\m\e\n\x\f\8\0\p\4\4\z\9\8\h\i\1\r\w\6\2\d\7\x\x\c\p\l\i\a\f\b\s\p\a\h\c\9\j\4\q\8\q\4\j\d\o\b\o\t\4\3\s\k\6\0\2\t\j\u\j\l\z\o\5\o\3\3\o\z\9\x\7\f\v\y\y\s\h\n\i\q\m\m\o ]] 00:06:05.400 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.400 16:51:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:05.400 [2024-11-20 16:51:23.844710] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:05.400 [2024-11-20 16:51:23.844811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:06:05.703 [2024-11-20 16:51:23.988521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.703 [2024-11-20 16:51:24.044941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.703 [2024-11-20 16:51:24.100780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.703  [2024-11-20T16:51:24.500Z] Copying: 512/512 [B] (average 166 kBps) 00:06:05.964 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zuxe89apcrrnger8qg4mxkbmg06a3ni404mnnk9309u4zz31qwd9w1sb747armpi9rqf0xq4xiem5fgfh2822o56m1p4401noz7x7zwn7l8i8ninc2fhwan9dij9bureg7ho4pvxscxvkijle4g2crofr86pzhp04c3bqjphcwr4fqpzurbqrdixyhf1buadgea0mjfcsv7tc3n6u32y03myv1imscqmobdkidafxx2s6qe9plj8udjfb4tgzov6qow22s6952djkms287qjyrp66reksrw605rz4ozme9cigwcjad9tw1e7mv3nnl9nl41wwf35g04sixa45u90xrt4ipx0xr1fu8k9yefv4hj5d5ruim1z3ir5654c2vv36zedx3ve3i1i1tpwofsebi8ubcqzav5nbkuwoem85b9mmenxf80p44z98hi1rw62d7xxcpliafbspahc9j4q8q4jdobot43sk602tjujlzo5o33oz9x7fvyyshniqmmo == \z\u\x\e\8\9\a\p\c\r\r\n\g\e\r\8\q\g\4\m\x\k\b\m\g\0\6\a\3\n\i\4\0\4\m\n\n\k\9\3\0\9\u\4\z\z\3\1\q\w\d\9\w\1\s\b\7\4\7\a\r\m\p\i\9\r\q\f\0\x\q\4\x\i\e\m\5\f\g\f\h\2\8\2\2\o\5\6\m\1\p\4\4\0\1\n\o\z\7\x\7\z\w\n\7\l\8\i\8\n\i\n\c\2\f\h\w\a\n\9\d\i\j\9\b\u\r\e\g\7\h\o\4\p\v\x\s\c\x\v\k\i\j\l\e\4\g\2\c\r\o\f\r\8\6\p\z\h\p\0\4\c\3\b\q\j\p\h\c\w\r\4\f\q\p\z\u\r\b\q\r\d\i\x\y\h\f\1\b\u\a\d\g\e\a\0\m\j\f\c\s\v\7\t\c\3\n\6\u\3\2\y\0\3\m\y\v\1\i\m\s\c\q\m\o\b\d\k\i\d\a\f\x\x\2\s\6\q\e\9\p\l\j\8\u\d\j\f\b\4\t\g\z\o\v\6\q\o\w\2\2\s\6\9\5\2\d\j\k\m\s\2\8\7\q\j\y\r\p\6\6\r\e\k\s\r\w\6\0\5\r\z\4\o\z\m\e\9\c\i\g\w\c\j\a\d\9\t\w\1\e\7\m\v\3\n\n\l\9\n\l\4\1\w\w\f\3\5\g\0\4\s\i\x\a\4\5\u\9\0\x\r\t\4\i\p\x\0\x\r\1\f\u\8\k\9\y\e\f\v\4\h\j\5\d\5\r\u\i\m\1\z\3\i\r\5\6\5\4\c\2\v\v\3\6\z\e\d\x\3\v\e\3\i\1\i\1\t\p\w\o\f\s\e\b\i\8\u\b\c\q\z\a\v\5\n\b\k\u\w\o\e\m\8\5\b\9\m\m\e\n\x\f\8\0\p\4\4\z\9\8\h\i\1\r\w\6\2\d\7\x\x\c\p\l\i\a\f\b\s\p\a\h\c\9\j\4\q\8\q\4\j\d\o\b\o\t\4\3\s\k\6\0\2\t\j\u\j\l\z\o\5\o\3\3\o\z\9\x\7\f\v\y\y\s\h\n\i\q\m\m\o ]] 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.964 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:05.964 [2024-11-20 16:51:24.404315] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
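Note that the payload changes between the zuxe... checks above and the 6arg... checks below: the gen_bytes 512 call at the top of the outer loop regenerates dd.dump0 before the nonblock input-flag iterations. Judging by the dd/common.sh frames in the trace, the helper lives in test/dd/common.sh; assuming it only needs printable random data (the observed strings are lowercase alphanumerics), a rough stand-in would be:

# hypothetical stand-in for the gen_bytes helper from test/dd/common.sh
gen_bytes() {
  local count=$1
  tr -dc 'a-z0-9' < /dev/urandom | head -c "$count" > test/dd/dd.dump0
}
gen_bytes 512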
00:06:05.964 [2024-11-20 16:51:24.404407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:06:06.223 [2024-11-20 16:51:24.547836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.223 [2024-11-20 16:51:24.607830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.223 [2024-11-20 16:51:24.665919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.223  [2024-11-20T16:51:25.018Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.482 00:06:06.482 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6arg4pbahh8gat551hnl4g4adx3i8n0ok8rkpf3r8zmygjcqm2stzvkc4fe7ztmkxw357nrrb9zif3gehindg6t852m7gbvbsqgabbya6o05alvjiwfgxgknov909zfxn4rvs9wamu5tdv387ick1rytbzj0u069w5s1wk6957d08r1cffaepi2yvbudidgken7kobdm1yviswupeww3128ugtvdwhv83w9mb5nwyzfzar1tcoesg3qg7c9aoornmrms0aiekffwto6ovtkbffi4acx2xvksbdgp4bvzmqhr8akl8ld68ka7m1414b7otxrpm01czrflapgw3uemd54jrti84zcxusxqodmk1894a7qsf51sm1ju9ukeun72pvsqo5w9pqwg23moiwyp0186zwt8iypyet19k8zrrl54gpn8sg3v5egudw4zlstm2602vpgzy4m6vl3v5059coyp5lw0h9hexie0249bva4zn2kwplgft38u8feu1cis == \6\a\r\g\4\p\b\a\h\h\8\g\a\t\5\5\1\h\n\l\4\g\4\a\d\x\3\i\8\n\0\o\k\8\r\k\p\f\3\r\8\z\m\y\g\j\c\q\m\2\s\t\z\v\k\c\4\f\e\7\z\t\m\k\x\w\3\5\7\n\r\r\b\9\z\i\f\3\g\e\h\i\n\d\g\6\t\8\5\2\m\7\g\b\v\b\s\q\g\a\b\b\y\a\6\o\0\5\a\l\v\j\i\w\f\g\x\g\k\n\o\v\9\0\9\z\f\x\n\4\r\v\s\9\w\a\m\u\5\t\d\v\3\8\7\i\c\k\1\r\y\t\b\z\j\0\u\0\6\9\w\5\s\1\w\k\6\9\5\7\d\0\8\r\1\c\f\f\a\e\p\i\2\y\v\b\u\d\i\d\g\k\e\n\7\k\o\b\d\m\1\y\v\i\s\w\u\p\e\w\w\3\1\2\8\u\g\t\v\d\w\h\v\8\3\w\9\m\b\5\n\w\y\z\f\z\a\r\1\t\c\o\e\s\g\3\q\g\7\c\9\a\o\o\r\n\m\r\m\s\0\a\i\e\k\f\f\w\t\o\6\o\v\t\k\b\f\f\i\4\a\c\x\2\x\v\k\s\b\d\g\p\4\b\v\z\m\q\h\r\8\a\k\l\8\l\d\6\8\k\a\7\m\1\4\1\4\b\7\o\t\x\r\p\m\0\1\c\z\r\f\l\a\p\g\w\3\u\e\m\d\5\4\j\r\t\i\8\4\z\c\x\u\s\x\q\o\d\m\k\1\8\9\4\a\7\q\s\f\5\1\s\m\1\j\u\9\u\k\e\u\n\7\2\p\v\s\q\o\5\w\9\p\q\w\g\2\3\m\o\i\w\y\p\0\1\8\6\z\w\t\8\i\y\p\y\e\t\1\9\k\8\z\r\r\l\5\4\g\p\n\8\s\g\3\v\5\e\g\u\d\w\4\z\l\s\t\m\2\6\0\2\v\p\g\z\y\4\m\6\v\l\3\v\5\0\5\9\c\o\y\p\5\l\w\0\h\9\h\e\x\i\e\0\2\4\9\b\v\a\4\z\n\2\k\w\p\l\g\f\t\3\8\u\8\f\e\u\1\c\i\s ]] 00:06:06.482 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:06.482 16:51:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:06.482 [2024-11-20 16:51:24.958107] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:06.482 [2024-11-20 16:51:24.958251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60348 ] 00:06:06.742 [2024-11-20 16:51:25.104956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.742 [2024-11-20 16:51:25.167414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.742 [2024-11-20 16:51:25.223998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.742  [2024-11-20T16:51:25.536Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.000 00:06:07.000 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6arg4pbahh8gat551hnl4g4adx3i8n0ok8rkpf3r8zmygjcqm2stzvkc4fe7ztmkxw357nrrb9zif3gehindg6t852m7gbvbsqgabbya6o05alvjiwfgxgknov909zfxn4rvs9wamu5tdv387ick1rytbzj0u069w5s1wk6957d08r1cffaepi2yvbudidgken7kobdm1yviswupeww3128ugtvdwhv83w9mb5nwyzfzar1tcoesg3qg7c9aoornmrms0aiekffwto6ovtkbffi4acx2xvksbdgp4bvzmqhr8akl8ld68ka7m1414b7otxrpm01czrflapgw3uemd54jrti84zcxusxqodmk1894a7qsf51sm1ju9ukeun72pvsqo5w9pqwg23moiwyp0186zwt8iypyet19k8zrrl54gpn8sg3v5egudw4zlstm2602vpgzy4m6vl3v5059coyp5lw0h9hexie0249bva4zn2kwplgft38u8feu1cis == \6\a\r\g\4\p\b\a\h\h\8\g\a\t\5\5\1\h\n\l\4\g\4\a\d\x\3\i\8\n\0\o\k\8\r\k\p\f\3\r\8\z\m\y\g\j\c\q\m\2\s\t\z\v\k\c\4\f\e\7\z\t\m\k\x\w\3\5\7\n\r\r\b\9\z\i\f\3\g\e\h\i\n\d\g\6\t\8\5\2\m\7\g\b\v\b\s\q\g\a\b\b\y\a\6\o\0\5\a\l\v\j\i\w\f\g\x\g\k\n\o\v\9\0\9\z\f\x\n\4\r\v\s\9\w\a\m\u\5\t\d\v\3\8\7\i\c\k\1\r\y\t\b\z\j\0\u\0\6\9\w\5\s\1\w\k\6\9\5\7\d\0\8\r\1\c\f\f\a\e\p\i\2\y\v\b\u\d\i\d\g\k\e\n\7\k\o\b\d\m\1\y\v\i\s\w\u\p\e\w\w\3\1\2\8\u\g\t\v\d\w\h\v\8\3\w\9\m\b\5\n\w\y\z\f\z\a\r\1\t\c\o\e\s\g\3\q\g\7\c\9\a\o\o\r\n\m\r\m\s\0\a\i\e\k\f\f\w\t\o\6\o\v\t\k\b\f\f\i\4\a\c\x\2\x\v\k\s\b\d\g\p\4\b\v\z\m\q\h\r\8\a\k\l\8\l\d\6\8\k\a\7\m\1\4\1\4\b\7\o\t\x\r\p\m\0\1\c\z\r\f\l\a\p\g\w\3\u\e\m\d\5\4\j\r\t\i\8\4\z\c\x\u\s\x\q\o\d\m\k\1\8\9\4\a\7\q\s\f\5\1\s\m\1\j\u\9\u\k\e\u\n\7\2\p\v\s\q\o\5\w\9\p\q\w\g\2\3\m\o\i\w\y\p\0\1\8\6\z\w\t\8\i\y\p\y\e\t\1\9\k\8\z\r\r\l\5\4\g\p\n\8\s\g\3\v\5\e\g\u\d\w\4\z\l\s\t\m\2\6\0\2\v\p\g\z\y\4\m\6\v\l\3\v\5\0\5\9\c\o\y\p\5\l\w\0\h\9\h\e\x\i\e\0\2\4\9\b\v\a\4\z\n\2\k\w\p\l\g\f\t\3\8\u\8\f\e\u\1\c\i\s ]] 00:06:07.000 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.000 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:07.000 [2024-11-20 16:51:25.497066] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:07.000 [2024-11-20 16:51:25.497168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:06:07.259 [2024-11-20 16:51:25.640328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.259 [2024-11-20 16:51:25.696088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.259 [2024-11-20 16:51:25.749983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.259  [2024-11-20T16:51:26.053Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.517 00:06:07.518 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6arg4pbahh8gat551hnl4g4adx3i8n0ok8rkpf3r8zmygjcqm2stzvkc4fe7ztmkxw357nrrb9zif3gehindg6t852m7gbvbsqgabbya6o05alvjiwfgxgknov909zfxn4rvs9wamu5tdv387ick1rytbzj0u069w5s1wk6957d08r1cffaepi2yvbudidgken7kobdm1yviswupeww3128ugtvdwhv83w9mb5nwyzfzar1tcoesg3qg7c9aoornmrms0aiekffwto6ovtkbffi4acx2xvksbdgp4bvzmqhr8akl8ld68ka7m1414b7otxrpm01czrflapgw3uemd54jrti84zcxusxqodmk1894a7qsf51sm1ju9ukeun72pvsqo5w9pqwg23moiwyp0186zwt8iypyet19k8zrrl54gpn8sg3v5egudw4zlstm2602vpgzy4m6vl3v5059coyp5lw0h9hexie0249bva4zn2kwplgft38u8feu1cis == \6\a\r\g\4\p\b\a\h\h\8\g\a\t\5\5\1\h\n\l\4\g\4\a\d\x\3\i\8\n\0\o\k\8\r\k\p\f\3\r\8\z\m\y\g\j\c\q\m\2\s\t\z\v\k\c\4\f\e\7\z\t\m\k\x\w\3\5\7\n\r\r\b\9\z\i\f\3\g\e\h\i\n\d\g\6\t\8\5\2\m\7\g\b\v\b\s\q\g\a\b\b\y\a\6\o\0\5\a\l\v\j\i\w\f\g\x\g\k\n\o\v\9\0\9\z\f\x\n\4\r\v\s\9\w\a\m\u\5\t\d\v\3\8\7\i\c\k\1\r\y\t\b\z\j\0\u\0\6\9\w\5\s\1\w\k\6\9\5\7\d\0\8\r\1\c\f\f\a\e\p\i\2\y\v\b\u\d\i\d\g\k\e\n\7\k\o\b\d\m\1\y\v\i\s\w\u\p\e\w\w\3\1\2\8\u\g\t\v\d\w\h\v\8\3\w\9\m\b\5\n\w\y\z\f\z\a\r\1\t\c\o\e\s\g\3\q\g\7\c\9\a\o\o\r\n\m\r\m\s\0\a\i\e\k\f\f\w\t\o\6\o\v\t\k\b\f\f\i\4\a\c\x\2\x\v\k\s\b\d\g\p\4\b\v\z\m\q\h\r\8\a\k\l\8\l\d\6\8\k\a\7\m\1\4\1\4\b\7\o\t\x\r\p\m\0\1\c\z\r\f\l\a\p\g\w\3\u\e\m\d\5\4\j\r\t\i\8\4\z\c\x\u\s\x\q\o\d\m\k\1\8\9\4\a\7\q\s\f\5\1\s\m\1\j\u\9\u\k\e\u\n\7\2\p\v\s\q\o\5\w\9\p\q\w\g\2\3\m\o\i\w\y\p\0\1\8\6\z\w\t\8\i\y\p\y\e\t\1\9\k\8\z\r\r\l\5\4\g\p\n\8\s\g\3\v\5\e\g\u\d\w\4\z\l\s\t\m\2\6\0\2\v\p\g\z\y\4\m\6\v\l\3\v\5\0\5\9\c\o\y\p\5\l\w\0\h\9\h\e\x\i\e\0\2\4\9\b\v\a\4\z\n\2\k\w\p\l\g\f\t\3\8\u\8\f\e\u\1\c\i\s ]] 00:06:07.518 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.518 16:51:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:07.518 [2024-11-20 16:51:26.036454] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
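The four output flags cycled here mirror GNU dd's oflag values and map onto open(2) flags: direct is O_DIRECT (bypass the page cache), nonblock is O_NONBLOCK, sync is O_SYNC and dsync is O_DSYNC; the input side only exercises direct and nonblock, presumably because sync and dsync only change write behaviour. For comparison, the same copies can be approximated with coreutils dd:

for oflag in direct nonblock sync dsync; do
  # O_DIRECT needs aligned I/O; bs=512 usually satisfies the alignment requirement
  dd if=test/dd/dd.dump0 of=test/dd/dd.dump1 bs=512 iflag=direct oflag="$oflag"
done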
00:06:07.518 [2024-11-20 16:51:26.036559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60367 ] 00:06:07.776 [2024-11-20 16:51:26.182482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.776 [2024-11-20 16:51:26.246403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.776 [2024-11-20 16:51:26.301805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.036  [2024-11-20T16:51:26.572Z] Copying: 512/512 [B] (average 166 kBps) 00:06:08.036 00:06:08.036 16:51:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6arg4pbahh8gat551hnl4g4adx3i8n0ok8rkpf3r8zmygjcqm2stzvkc4fe7ztmkxw357nrrb9zif3gehindg6t852m7gbvbsqgabbya6o05alvjiwfgxgknov909zfxn4rvs9wamu5tdv387ick1rytbzj0u069w5s1wk6957d08r1cffaepi2yvbudidgken7kobdm1yviswupeww3128ugtvdwhv83w9mb5nwyzfzar1tcoesg3qg7c9aoornmrms0aiekffwto6ovtkbffi4acx2xvksbdgp4bvzmqhr8akl8ld68ka7m1414b7otxrpm01czrflapgw3uemd54jrti84zcxusxqodmk1894a7qsf51sm1ju9ukeun72pvsqo5w9pqwg23moiwyp0186zwt8iypyet19k8zrrl54gpn8sg3v5egudw4zlstm2602vpgzy4m6vl3v5059coyp5lw0h9hexie0249bva4zn2kwplgft38u8feu1cis == \6\a\r\g\4\p\b\a\h\h\8\g\a\t\5\5\1\h\n\l\4\g\4\a\d\x\3\i\8\n\0\o\k\8\r\k\p\f\3\r\8\z\m\y\g\j\c\q\m\2\s\t\z\v\k\c\4\f\e\7\z\t\m\k\x\w\3\5\7\n\r\r\b\9\z\i\f\3\g\e\h\i\n\d\g\6\t\8\5\2\m\7\g\b\v\b\s\q\g\a\b\b\y\a\6\o\0\5\a\l\v\j\i\w\f\g\x\g\k\n\o\v\9\0\9\z\f\x\n\4\r\v\s\9\w\a\m\u\5\t\d\v\3\8\7\i\c\k\1\r\y\t\b\z\j\0\u\0\6\9\w\5\s\1\w\k\6\9\5\7\d\0\8\r\1\c\f\f\a\e\p\i\2\y\v\b\u\d\i\d\g\k\e\n\7\k\o\b\d\m\1\y\v\i\s\w\u\p\e\w\w\3\1\2\8\u\g\t\v\d\w\h\v\8\3\w\9\m\b\5\n\w\y\z\f\z\a\r\1\t\c\o\e\s\g\3\q\g\7\c\9\a\o\o\r\n\m\r\m\s\0\a\i\e\k\f\f\w\t\o\6\o\v\t\k\b\f\f\i\4\a\c\x\2\x\v\k\s\b\d\g\p\4\b\v\z\m\q\h\r\8\a\k\l\8\l\d\6\8\k\a\7\m\1\4\1\4\b\7\o\t\x\r\p\m\0\1\c\z\r\f\l\a\p\g\w\3\u\e\m\d\5\4\j\r\t\i\8\4\z\c\x\u\s\x\q\o\d\m\k\1\8\9\4\a\7\q\s\f\5\1\s\m\1\j\u\9\u\k\e\u\n\7\2\p\v\s\q\o\5\w\9\p\q\w\g\2\3\m\o\i\w\y\p\0\1\8\6\z\w\t\8\i\y\p\y\e\t\1\9\k\8\z\r\r\l\5\4\g\p\n\8\s\g\3\v\5\e\g\u\d\w\4\z\l\s\t\m\2\6\0\2\v\p\g\z\y\4\m\6\v\l\3\v\5\0\5\9\c\o\y\p\5\l\w\0\h\9\h\e\x\i\e\0\2\4\9\b\v\a\4\z\n\2\k\w\p\l\g\f\t\3\8\u\8\f\e\u\1\c\i\s ]] 00:06:08.036 00:06:08.036 real 0m4.390s 00:06:08.036 user 0m2.410s 00:06:08.036 sys 0m2.234s 00:06:08.036 ************************************ 00:06:08.036 END TEST dd_flags_misc 00:06:08.036 ************************************ 00:06:08.036 16:51:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.036 16:51:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:08.295 * Second test run, disabling liburing, forcing AIO 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.295 ************************************ 00:06:08.295 START TEST dd_flag_append_forced_aio 00:06:08.295 ************************************ 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=hi7hdrb8ywniptnv41xaunmbt3jcjuar 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=504o9jao0fl77c80kzhetws3915t3pvo 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s hi7hdrb8ywniptnv41xaunmbt3jcjuar 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 504o9jao0fl77c80kzhetws3915t3pvo 00:06:08.295 16:51:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:08.295 [2024-11-20 16:51:26.649806] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
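dd_flag_append_forced_aio opens the rerun in which DD_APP has been extended with --aio (the '* Second test run, disabling liburing, forcing AIO' banner above). It writes one 32-character string into each dump file, copies dd.dump0 onto dd.dump1 with --oflag=append, and expects dd.dump1 to end up as the concatenation of the two strings, which is what the 504o...hi7h... comparison a little further on asserts. Reconstructed from the trace, with the output redirections being an assumption:

dump0=hi7hdrb8ywniptnv41xaunmbt3jcjuar       # gen_bytes 32
dump1=504o9jao0fl77c80kzhetws3915t3pvo       # gen_bytes 32
printf %s "$dump0" > test/dd/dd.dump0
printf %s "$dump1" > test/dd/dd.dump1
"${DD_APP[@]}" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
[[ $(<test/dd/dd.dump1) == "${dump1}${dump0}" ]]   # old dump1 content followed by the appended copy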
00:06:08.295 [2024-11-20 16:51:26.649951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:06:08.295 [2024-11-20 16:51:26.798050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.554 [2024-11-20 16:51:26.852213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.555 [2024-11-20 16:51:26.908028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.555  [2024-11-20T16:51:27.349Z] Copying: 32/32 [B] (average 31 kBps) 00:06:08.813 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 504o9jao0fl77c80kzhetws3915t3pvohi7hdrb8ywniptnv41xaunmbt3jcjuar == \5\0\4\o\9\j\a\o\0\f\l\7\7\c\8\0\k\z\h\e\t\w\s\3\9\1\5\t\3\p\v\o\h\i\7\h\d\r\b\8\y\w\n\i\p\t\n\v\4\1\x\a\u\n\m\b\t\3\j\c\j\u\a\r ]] 00:06:08.813 00:06:08.813 real 0m0.571s 00:06:08.813 user 0m0.305s 00:06:08.813 sys 0m0.144s 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.813 ************************************ 00:06:08.813 END TEST dd_flag_append_forced_aio 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.813 ************************************ 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.813 ************************************ 00:06:08.813 START TEST dd_flag_directory_forced_aio 00:06:08.813 ************************************ 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.813 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.814 16:51:27 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.814 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.814 [2024-11-20 16:51:27.267583] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:08.814 [2024-11-20 16:51:27.267692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:06:09.073 [2024-11-20 16:51:27.415189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.073 [2024-11-20 16:51:27.476582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.073 [2024-11-20 16:51:27.531506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.073 [2024-11-20 16:51:27.569292] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.073 [2024-11-20 16:51:27.569353] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.073 [2024-11-20 16:51:27.569387] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.332 [2024-11-20 16:51:27.688110] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.332 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:09.332 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.332 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:09.332 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.333 16:51:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.333 [2024-11-20 16:51:27.805412] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:09.333 [2024-11-20 16:51:27.805494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:06:09.591 [2024-11-20 16:51:27.950849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.591 [2024-11-20 16:51:28.002618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.591 [2024-11-20 16:51:28.057355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.591 [2024-11-20 16:51:28.094478] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.591 [2024-11-20 16:51:28.094540] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.591 [2024-11-20 16:51:28.094575] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.850 [2024-11-20 16:51:28.212531] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:09.850 16:51:28 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.850 00:06:09.850 real 0m1.074s 00:06:09.850 user 0m0.582s 00:06:09.850 sys 0m0.281s 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.850 ************************************ 00:06:09.850 END TEST dd_flag_directory_forced_aio 00:06:09.850 ************************************ 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.850 ************************************ 00:06:09.850 START TEST dd_flag_nofollow_forced_aio 00:06:09.850 ************************************ 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.850 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.108 [2024-11-20 16:51:28.391086] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:10.108 [2024-11-20 16:51:28.391173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60465 ] 00:06:10.108 [2024-11-20 16:51:28.533625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.108 [2024-11-20 16:51:28.582290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.108 [2024-11-20 16:51:28.638212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.370 [2024-11-20 16:51:28.677456] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.370 [2024-11-20 16:51:28.677514] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.370 [2024-11-20 16:51:28.677549] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.370 [2024-11-20 16:51:28.798372] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.370 16:51:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.629 [2024-11-20 16:51:28.932344] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:10.629 [2024-11-20 16:51:28.932438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60475 ] 00:06:10.629 [2024-11-20 16:51:29.086019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.629 [2024-11-20 16:51:29.144950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.901 [2024-11-20 16:51:29.201993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.901 [2024-11-20 16:51:29.244324] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:10.901 [2024-11-20 16:51:29.244405] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:10.901 [2024-11-20 16:51:29.244439] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.901 [2024-11-20 16:51:29.374140] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.163 16:51:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.163 [2024-11-20 16:51:29.504210] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:11.163 [2024-11-20 16:51:29.504338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:06:11.163 [2024-11-20 16:51:29.649526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.421 [2024-11-20 16:51:29.707444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.421 [2024-11-20 16:51:29.765332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.421  [2024-11-20T16:51:30.216Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.680 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ dtkv0u7t1nr91ypnf06lzs9znnar901rpw5wpovdwiwnujsu1p7p0o6hskkwkll3prvyh7pgi4ptn6zttei3fit0ucn8roce5wxyrgr4plkxjwpg4ibu5i39r9osi3xfhxx6ld131l0tztejeqlp848y35ik7bcx2dawvmxj36e2ldsneix3ze63387xrx88vzptr0v40dhzc9ns68fyqwh6oclak9dunh3dp82agzr4w6ui4mgvyrggucle6e1mi7iupq8eod9jz9buf7weza8qykoc9q0k8v0j3dgmogh04bqxgl9faphn48kltbhk2pijbsu0ucmntl5qrh69n415pkpz3nmp0547nt31th9cyae2h6wsxfqnznvp6bh5k2uzuomel2ynyso9wiedws7594rv7ngyfuqr7rotj0pdhoiniih09vzwy7dlrwzjur8xil2nlbkezruf8b95vwv7elzqm0fz0uwpi25qbx8wlvzul0letoaqsobkxxeg == \d\t\k\v\0\u\7\t\1\n\r\9\1\y\p\n\f\0\6\l\z\s\9\z\n\n\a\r\9\0\1\r\p\w\5\w\p\o\v\d\w\i\w\n\u\j\s\u\1\p\7\p\0\o\6\h\s\k\k\w\k\l\l\3\p\r\v\y\h\7\p\g\i\4\p\t\n\6\z\t\t\e\i\3\f\i\t\0\u\c\n\8\r\o\c\e\5\w\x\y\r\g\r\4\p\l\k\x\j\w\p\g\4\i\b\u\5\i\3\9\r\9\o\s\i\3\x\f\h\x\x\6\l\d\1\3\1\l\0\t\z\t\e\j\e\q\l\p\8\4\8\y\3\5\i\k\7\b\c\x\2\d\a\w\v\m\x\j\3\6\e\2\l\d\s\n\e\i\x\3\z\e\6\3\3\8\7\x\r\x\8\8\v\z\p\t\r\0\v\4\0\d\h\z\c\9\n\s\6\8\f\y\q\w\h\6\o\c\l\a\k\9\d\u\n\h\3\d\p\8\2\a\g\z\r\4\w\6\u\i\4\m\g\v\y\r\g\g\u\c\l\e\6\e\1\m\i\7\i\u\p\q\8\e\o\d\9\j\z\9\b\u\f\7\w\e\z\a\8\q\y\k\o\c\9\q\0\k\8\v\0\j\3\d\g\m\o\g\h\0\4\b\q\x\g\l\9\f\a\p\h\n\4\8\k\l\t\b\h\k\2\p\i\j\b\s\u\0\u\c\m\n\t\l\5\q\r\h\6\9\n\4\1\5\p\k\p\z\3\n\m\p\0\5\4\7\n\t\3\1\t\h\9\c\y\a\e\2\h\6\w\s\x\f\q\n\z\n\v\p\6\b\h\5\k\2\u\z\u\o\m\e\l\2\y\n\y\s\o\9\w\i\e\d\w\s\7\5\9\4\r\v\7\n\g\y\f\u\q\r\7\r\o\t\j\0\p\d\h\o\i\n\i\i\h\0\9\v\z\w\y\7\d\l\r\w\z\j\u\r\8\x\i\l\2\n\l\b\k\e\z\r\u\f\8\b\9\5\v\w\v\7\e\l\z\q\m\0\f\z\0\u\w\p\i\2\5\q\b\x\8\w\l\v\z\u\l\0\l\e\t\o\a\q\s\o\b\k\x\x\e\g ]] 00:06:11.681 00:06:11.681 real 0m1.695s 00:06:11.681 user 0m0.925s 00:06:11.681 sys 0m0.438s 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.681 ************************************ 00:06:11.681 END TEST dd_flag_nofollow_forced_aio 00:06:11.681 ************************************ 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.681 ************************************ 00:06:11.681 START TEST dd_flag_noatime_forced_aio 00:06:11.681 ************************************ 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732121489 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732121490 00:06:11.681 16:51:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:12.671 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.671 [2024-11-20 16:51:31.158206] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
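The two expected-failure groups above, dd_flag_directory_forced_aio and dd_flag_nofollow_forced_aio, invert the usual pattern: the NOT wrapper from autotest_common.sh asserts that spdk_dd exits non-zero, so the *ERROR* lines about 'Not a directory' and 'Too many levels of symbolic links' are the intended outcome. Opening a regular dump file with --iflag=directory or --oflag=directory has to fail with ENOTDIR, and opening a symlink created by ln -fs with the nofollow flag has to fail with ELOOP. A condensed sketch of the nofollow case, using the helper names visible in the trace:

ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
# NOT inverts the exit status: the step passes only if spdk_dd refuses to follow the symlink
NOT "${DD_APP[@]}" --if=test/dd/dd.dump0.link --iflag=nofollow --of=test/dd/dd.dump1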
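dd_flag_noatime_forced_aio, whose first copy is launching above, records the source file's access time with stat --printf=%X, sleeps for a second, and copies with --iflag=noatime, asserting that the stored atime is unchanged; a second copy without the flag is then expected to leave the atime ahead of the stored value (the (( atime_if == ... )) and (( atime_if < ... )) checks below). Simplified into a sketch; the real script compares stored epoch values, and whether the plain read actually advances atime depends on the filesystem's atime mount options:

atime_if=$(stat --printf=%X test/dd/dd.dump0)              # access time before any copy
sleep 1
"${DD_APP[@]}" --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
(( $(stat --printf=%X test/dd/dd.dump0) == atime_if ))     # noatime read must not bump it
"${DD_APP[@]}" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
(( $(stat --printf=%X test/dd/dd.dump0) > atime_if ))      # a normal read should move it forward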
00:06:12.671 [2024-11-20 16:51:31.158338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ] 00:06:12.945 [2024-11-20 16:51:31.311000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.945 [2024-11-20 16:51:31.377493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.945 [2024-11-20 16:51:31.436317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.945  [2024-11-20T16:51:31.740Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.204 00:06:13.204 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.204 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732121489 )) 00:06:13.204 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.204 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732121490 )) 00:06:13.204 16:51:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.462 [2024-11-20 16:51:31.744594] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:13.462 [2024-11-20 16:51:31.744718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:06:13.462 [2024-11-20 16:51:31.887389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.462 [2024-11-20 16:51:31.949487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.720 [2024-11-20 16:51:32.005051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.720  [2024-11-20T16:51:32.256Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.720 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732121492 )) 00:06:13.979 00:06:13.979 real 0m2.181s 00:06:13.979 user 0m0.642s 00:06:13.979 sys 0m0.300s 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.979 ************************************ 00:06:13.979 END TEST dd_flag_noatime_forced_aio 00:06:13.979 ************************************ 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:13.979 ************************************ 00:06:13.979 START TEST dd_flags_misc_forced_aio 00:06:13.979 ************************************ 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.979 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:13.979 [2024-11-20 16:51:32.401848] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:13.979 [2024-11-20 16:51:32.401987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60561 ] 00:06:14.238 [2024-11-20 16:51:32.558207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.238 [2024-11-20 16:51:32.617447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.238 [2024-11-20 16:51:32.672633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.238  [2024-11-20T16:51:33.032Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.496 00:06:14.496 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9gvi0vx39v9ww6zsuk4m9cbz1th5ez8f7sdg81fhm8lcywcf6cf87s09lp954dgqcg1w9h4srducawmdmm25x977bm82gkycymzli9jm6xez7n9eq7yweelrm30kt5109hqu5rxhdohyww9147hxwvqxeky3n7a8qrdarsg30jkjygexvs8fom6cseesqlf7sva2yyvjteml9pg6d7qjus26ivfqhd0v8kzsi3ph5bg8uz05px1c3bbazvqb4b39tkq10a9wrz8pbjt6blwakkz6if6pruzu9oi6cm0khpemjqdbcaabi8fie12nbaehcxgsb3tlq7yyied9bvjk51avr8wzkr8h8tb8rrg1b5z7ua2ih5y4b0lx80olpt597szzpgkoblqhzzraaitkd05822xe70obtf3cdv1ad6wod2wlpbtoi3cg0iwthw0xwd7nb7snbxygyosbtiffn9gfol120e6bwxvsu366xlmaj8t4x6n68f7p4zh87paz == 
\9\g\v\i\0\v\x\3\9\v\9\w\w\6\z\s\u\k\4\m\9\c\b\z\1\t\h\5\e\z\8\f\7\s\d\g\8\1\f\h\m\8\l\c\y\w\c\f\6\c\f\8\7\s\0\9\l\p\9\5\4\d\g\q\c\g\1\w\9\h\4\s\r\d\u\c\a\w\m\d\m\m\2\5\x\9\7\7\b\m\8\2\g\k\y\c\y\m\z\l\i\9\j\m\6\x\e\z\7\n\9\e\q\7\y\w\e\e\l\r\m\3\0\k\t\5\1\0\9\h\q\u\5\r\x\h\d\o\h\y\w\w\9\1\4\7\h\x\w\v\q\x\e\k\y\3\n\7\a\8\q\r\d\a\r\s\g\3\0\j\k\j\y\g\e\x\v\s\8\f\o\m\6\c\s\e\e\s\q\l\f\7\s\v\a\2\y\y\v\j\t\e\m\l\9\p\g\6\d\7\q\j\u\s\2\6\i\v\f\q\h\d\0\v\8\k\z\s\i\3\p\h\5\b\g\8\u\z\0\5\p\x\1\c\3\b\b\a\z\v\q\b\4\b\3\9\t\k\q\1\0\a\9\w\r\z\8\p\b\j\t\6\b\l\w\a\k\k\z\6\i\f\6\p\r\u\z\u\9\o\i\6\c\m\0\k\h\p\e\m\j\q\d\b\c\a\a\b\i\8\f\i\e\1\2\n\b\a\e\h\c\x\g\s\b\3\t\l\q\7\y\y\i\e\d\9\b\v\j\k\5\1\a\v\r\8\w\z\k\r\8\h\8\t\b\8\r\r\g\1\b\5\z\7\u\a\2\i\h\5\y\4\b\0\l\x\8\0\o\l\p\t\5\9\7\s\z\z\p\g\k\o\b\l\q\h\z\z\r\a\a\i\t\k\d\0\5\8\2\2\x\e\7\0\o\b\t\f\3\c\d\v\1\a\d\6\w\o\d\2\w\l\p\b\t\o\i\3\c\g\0\i\w\t\h\w\0\x\w\d\7\n\b\7\s\n\b\x\y\g\y\o\s\b\t\i\f\f\n\9\g\f\o\l\1\2\0\e\6\b\w\x\v\s\u\3\6\6\x\l\m\a\j\8\t\4\x\6\n\6\8\f\7\p\4\z\h\8\7\p\a\z ]] 00:06:14.496 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.496 16:51:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.496 [2024-11-20 16:51:32.988270] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:14.496 [2024-11-20 16:51:32.988376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:06:14.755 [2024-11-20 16:51:33.133876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.755 [2024-11-20 16:51:33.190352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.755 [2024-11-20 16:51:33.244690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.755  [2024-11-20T16:51:33.549Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.013 00:06:15.013 16:51:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9gvi0vx39v9ww6zsuk4m9cbz1th5ez8f7sdg81fhm8lcywcf6cf87s09lp954dgqcg1w9h4srducawmdmm25x977bm82gkycymzli9jm6xez7n9eq7yweelrm30kt5109hqu5rxhdohyww9147hxwvqxeky3n7a8qrdarsg30jkjygexvs8fom6cseesqlf7sva2yyvjteml9pg6d7qjus26ivfqhd0v8kzsi3ph5bg8uz05px1c3bbazvqb4b39tkq10a9wrz8pbjt6blwakkz6if6pruzu9oi6cm0khpemjqdbcaabi8fie12nbaehcxgsb3tlq7yyied9bvjk51avr8wzkr8h8tb8rrg1b5z7ua2ih5y4b0lx80olpt597szzpgkoblqhzzraaitkd05822xe70obtf3cdv1ad6wod2wlpbtoi3cg0iwthw0xwd7nb7snbxygyosbtiffn9gfol120e6bwxvsu366xlmaj8t4x6n68f7p4zh87paz == 
\9\g\v\i\0\v\x\3\9\v\9\w\w\6\z\s\u\k\4\m\9\c\b\z\1\t\h\5\e\z\8\f\7\s\d\g\8\1\f\h\m\8\l\c\y\w\c\f\6\c\f\8\7\s\0\9\l\p\9\5\4\d\g\q\c\g\1\w\9\h\4\s\r\d\u\c\a\w\m\d\m\m\2\5\x\9\7\7\b\m\8\2\g\k\y\c\y\m\z\l\i\9\j\m\6\x\e\z\7\n\9\e\q\7\y\w\e\e\l\r\m\3\0\k\t\5\1\0\9\h\q\u\5\r\x\h\d\o\h\y\w\w\9\1\4\7\h\x\w\v\q\x\e\k\y\3\n\7\a\8\q\r\d\a\r\s\g\3\0\j\k\j\y\g\e\x\v\s\8\f\o\m\6\c\s\e\e\s\q\l\f\7\s\v\a\2\y\y\v\j\t\e\m\l\9\p\g\6\d\7\q\j\u\s\2\6\i\v\f\q\h\d\0\v\8\k\z\s\i\3\p\h\5\b\g\8\u\z\0\5\p\x\1\c\3\b\b\a\z\v\q\b\4\b\3\9\t\k\q\1\0\a\9\w\r\z\8\p\b\j\t\6\b\l\w\a\k\k\z\6\i\f\6\p\r\u\z\u\9\o\i\6\c\m\0\k\h\p\e\m\j\q\d\b\c\a\a\b\i\8\f\i\e\1\2\n\b\a\e\h\c\x\g\s\b\3\t\l\q\7\y\y\i\e\d\9\b\v\j\k\5\1\a\v\r\8\w\z\k\r\8\h\8\t\b\8\r\r\g\1\b\5\z\7\u\a\2\i\h\5\y\4\b\0\l\x\8\0\o\l\p\t\5\9\7\s\z\z\p\g\k\o\b\l\q\h\z\z\r\a\a\i\t\k\d\0\5\8\2\2\x\e\7\0\o\b\t\f\3\c\d\v\1\a\d\6\w\o\d\2\w\l\p\b\t\o\i\3\c\g\0\i\w\t\h\w\0\x\w\d\7\n\b\7\s\n\b\x\y\g\y\o\s\b\t\i\f\f\n\9\g\f\o\l\1\2\0\e\6\b\w\x\v\s\u\3\6\6\x\l\m\a\j\8\t\4\x\6\n\6\8\f\7\p\4\z\h\8\7\p\a\z ]] 00:06:15.013 16:51:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.013 16:51:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.013 [2024-11-20 16:51:33.546491] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:15.013 [2024-11-20 16:51:33.546604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60581 ] 00:06:15.272 [2024-11-20 16:51:33.687642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.272 [2024-11-20 16:51:33.749167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.272 [2024-11-20 16:51:33.802979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.530  [2024-11-20T16:51:34.066Z] Copying: 512/512 [B] (average 166 kBps) 00:06:15.530 00:06:15.530 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9gvi0vx39v9ww6zsuk4m9cbz1th5ez8f7sdg81fhm8lcywcf6cf87s09lp954dgqcg1w9h4srducawmdmm25x977bm82gkycymzli9jm6xez7n9eq7yweelrm30kt5109hqu5rxhdohyww9147hxwvqxeky3n7a8qrdarsg30jkjygexvs8fom6cseesqlf7sva2yyvjteml9pg6d7qjus26ivfqhd0v8kzsi3ph5bg8uz05px1c3bbazvqb4b39tkq10a9wrz8pbjt6blwakkz6if6pruzu9oi6cm0khpemjqdbcaabi8fie12nbaehcxgsb3tlq7yyied9bvjk51avr8wzkr8h8tb8rrg1b5z7ua2ih5y4b0lx80olpt597szzpgkoblqhzzraaitkd05822xe70obtf3cdv1ad6wod2wlpbtoi3cg0iwthw0xwd7nb7snbxygyosbtiffn9gfol120e6bwxvsu366xlmaj8t4x6n68f7p4zh87paz == 
\9\g\v\i\0\v\x\3\9\v\9\w\w\6\z\s\u\k\4\m\9\c\b\z\1\t\h\5\e\z\8\f\7\s\d\g\8\1\f\h\m\8\l\c\y\w\c\f\6\c\f\8\7\s\0\9\l\p\9\5\4\d\g\q\c\g\1\w\9\h\4\s\r\d\u\c\a\w\m\d\m\m\2\5\x\9\7\7\b\m\8\2\g\k\y\c\y\m\z\l\i\9\j\m\6\x\e\z\7\n\9\e\q\7\y\w\e\e\l\r\m\3\0\k\t\5\1\0\9\h\q\u\5\r\x\h\d\o\h\y\w\w\9\1\4\7\h\x\w\v\q\x\e\k\y\3\n\7\a\8\q\r\d\a\r\s\g\3\0\j\k\j\y\g\e\x\v\s\8\f\o\m\6\c\s\e\e\s\q\l\f\7\s\v\a\2\y\y\v\j\t\e\m\l\9\p\g\6\d\7\q\j\u\s\2\6\i\v\f\q\h\d\0\v\8\k\z\s\i\3\p\h\5\b\g\8\u\z\0\5\p\x\1\c\3\b\b\a\z\v\q\b\4\b\3\9\t\k\q\1\0\a\9\w\r\z\8\p\b\j\t\6\b\l\w\a\k\k\z\6\i\f\6\p\r\u\z\u\9\o\i\6\c\m\0\k\h\p\e\m\j\q\d\b\c\a\a\b\i\8\f\i\e\1\2\n\b\a\e\h\c\x\g\s\b\3\t\l\q\7\y\y\i\e\d\9\b\v\j\k\5\1\a\v\r\8\w\z\k\r\8\h\8\t\b\8\r\r\g\1\b\5\z\7\u\a\2\i\h\5\y\4\b\0\l\x\8\0\o\l\p\t\5\9\7\s\z\z\p\g\k\o\b\l\q\h\z\z\r\a\a\i\t\k\d\0\5\8\2\2\x\e\7\0\o\b\t\f\3\c\d\v\1\a\d\6\w\o\d\2\w\l\p\b\t\o\i\3\c\g\0\i\w\t\h\w\0\x\w\d\7\n\b\7\s\n\b\x\y\g\y\o\s\b\t\i\f\f\n\9\g\f\o\l\1\2\0\e\6\b\w\x\v\s\u\3\6\6\x\l\m\a\j\8\t\4\x\6\n\6\8\f\7\p\4\z\h\8\7\p\a\z ]] 00:06:15.530 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.530 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:15.789 [2024-11-20 16:51:34.117018] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:15.789 [2024-11-20 16:51:34.117122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:06:15.789 [2024-11-20 16:51:34.260669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.789 [2024-11-20 16:51:34.319057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.049 [2024-11-20 16:51:34.374057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.049  [2024-11-20T16:51:34.844Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.308 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9gvi0vx39v9ww6zsuk4m9cbz1th5ez8f7sdg81fhm8lcywcf6cf87s09lp954dgqcg1w9h4srducawmdmm25x977bm82gkycymzli9jm6xez7n9eq7yweelrm30kt5109hqu5rxhdohyww9147hxwvqxeky3n7a8qrdarsg30jkjygexvs8fom6cseesqlf7sva2yyvjteml9pg6d7qjus26ivfqhd0v8kzsi3ph5bg8uz05px1c3bbazvqb4b39tkq10a9wrz8pbjt6blwakkz6if6pruzu9oi6cm0khpemjqdbcaabi8fie12nbaehcxgsb3tlq7yyied9bvjk51avr8wzkr8h8tb8rrg1b5z7ua2ih5y4b0lx80olpt597szzpgkoblqhzzraaitkd05822xe70obtf3cdv1ad6wod2wlpbtoi3cg0iwthw0xwd7nb7snbxygyosbtiffn9gfol120e6bwxvsu366xlmaj8t4x6n68f7p4zh87paz == 
\9\g\v\i\0\v\x\3\9\v\9\w\w\6\z\s\u\k\4\m\9\c\b\z\1\t\h\5\e\z\8\f\7\s\d\g\8\1\f\h\m\8\l\c\y\w\c\f\6\c\f\8\7\s\0\9\l\p\9\5\4\d\g\q\c\g\1\w\9\h\4\s\r\d\u\c\a\w\m\d\m\m\2\5\x\9\7\7\b\m\8\2\g\k\y\c\y\m\z\l\i\9\j\m\6\x\e\z\7\n\9\e\q\7\y\w\e\e\l\r\m\3\0\k\t\5\1\0\9\h\q\u\5\r\x\h\d\o\h\y\w\w\9\1\4\7\h\x\w\v\q\x\e\k\y\3\n\7\a\8\q\r\d\a\r\s\g\3\0\j\k\j\y\g\e\x\v\s\8\f\o\m\6\c\s\e\e\s\q\l\f\7\s\v\a\2\y\y\v\j\t\e\m\l\9\p\g\6\d\7\q\j\u\s\2\6\i\v\f\q\h\d\0\v\8\k\z\s\i\3\p\h\5\b\g\8\u\z\0\5\p\x\1\c\3\b\b\a\z\v\q\b\4\b\3\9\t\k\q\1\0\a\9\w\r\z\8\p\b\j\t\6\b\l\w\a\k\k\z\6\i\f\6\p\r\u\z\u\9\o\i\6\c\m\0\k\h\p\e\m\j\q\d\b\c\a\a\b\i\8\f\i\e\1\2\n\b\a\e\h\c\x\g\s\b\3\t\l\q\7\y\y\i\e\d\9\b\v\j\k\5\1\a\v\r\8\w\z\k\r\8\h\8\t\b\8\r\r\g\1\b\5\z\7\u\a\2\i\h\5\y\4\b\0\l\x\8\0\o\l\p\t\5\9\7\s\z\z\p\g\k\o\b\l\q\h\z\z\r\a\a\i\t\k\d\0\5\8\2\2\x\e\7\0\o\b\t\f\3\c\d\v\1\a\d\6\w\o\d\2\w\l\p\b\t\o\i\3\c\g\0\i\w\t\h\w\0\x\w\d\7\n\b\7\s\n\b\x\y\g\y\o\s\b\t\i\f\f\n\9\g\f\o\l\1\2\0\e\6\b\w\x\v\s\u\3\6\6\x\l\m\a\j\8\t\4\x\6\n\6\8\f\7\p\4\z\h\8\7\p\a\z ]] 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.308 16:51:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:16.308 [2024-11-20 16:51:34.697017] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
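At this point the trace has covered the --iflag=direct half of the matrix and is starting on --iflag=nonblock. The loop dd/posix.sh is exercising, reconstructed from the xtrace line references (posix.sh@77-93) rather than from the script source, looks roughly like the sketch below; the payload variable and the redirection into the comparison are placeholders, since xtrace does not show them:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)        # write flags add sync/dsync on top of the read flags
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512                                # fresh 512-byte payload for each read flag
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
        [[ $(<dd.dump1) == "$payload" ]]           # placeholder: output must equal the generated payload
      done
    done

The long backslash-escaped strings in each comparison are not corruption; bash xtrace escapes every character of the quoted right-hand side of [[ ... == ... ]] to show it is matched literally.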
00:06:16.308 [2024-11-20 16:51:34.697138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:06:16.567 [2024-11-20 16:51:34.842914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.567 [2024-11-20 16:51:34.906703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.567 [2024-11-20 16:51:34.962297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.567  [2024-11-20T16:51:35.361Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.826 00:06:16.826 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c99t7ky8yxwmp8n4d281i3qwh91mqgmmuzuekygwvc4255mo54ytpnbl4l6e84pj6r51foz5svfrrwd71mvrr8gnukjetss7foc09a9rfxgb8znltes57leeqzk51ji7mhwh2t0ioxf9x15ocx3k0amn6zy44dojwkn22ck0vfyove0u0qenzukz5dzqwwb0bshggw5rljdpdzsvkb8apujlheqldnh73vjtccbzuwtittgeh3d00vpaculdohk84vkjhuqdbs2b3q42eouh4scifgxccj71otxqu4rt3akjce82p0m2ztiocn6udqvwappknrloavb9yrhaobqv3a1r4ziybfevdn7hpdboef22hrkvtohaosfh67ab1g2pn1edhlrqm9qoacmt8ep3d74pbnszhv9gj6u03sfjlhtb99qlobdtotl4bltxsr5f915e80advvwr9m3pssbb4ezduu3cjqo558hpr3m257os8clzz61dfl8utwymbh4j == \c\9\9\t\7\k\y\8\y\x\w\m\p\8\n\4\d\2\8\1\i\3\q\w\h\9\1\m\q\g\m\m\u\z\u\e\k\y\g\w\v\c\4\2\5\5\m\o\5\4\y\t\p\n\b\l\4\l\6\e\8\4\p\j\6\r\5\1\f\o\z\5\s\v\f\r\r\w\d\7\1\m\v\r\r\8\g\n\u\k\j\e\t\s\s\7\f\o\c\0\9\a\9\r\f\x\g\b\8\z\n\l\t\e\s\5\7\l\e\e\q\z\k\5\1\j\i\7\m\h\w\h\2\t\0\i\o\x\f\9\x\1\5\o\c\x\3\k\0\a\m\n\6\z\y\4\4\d\o\j\w\k\n\2\2\c\k\0\v\f\y\o\v\e\0\u\0\q\e\n\z\u\k\z\5\d\z\q\w\w\b\0\b\s\h\g\g\w\5\r\l\j\d\p\d\z\s\v\k\b\8\a\p\u\j\l\h\e\q\l\d\n\h\7\3\v\j\t\c\c\b\z\u\w\t\i\t\t\g\e\h\3\d\0\0\v\p\a\c\u\l\d\o\h\k\8\4\v\k\j\h\u\q\d\b\s\2\b\3\q\4\2\e\o\u\h\4\s\c\i\f\g\x\c\c\j\7\1\o\t\x\q\u\4\r\t\3\a\k\j\c\e\8\2\p\0\m\2\z\t\i\o\c\n\6\u\d\q\v\w\a\p\p\k\n\r\l\o\a\v\b\9\y\r\h\a\o\b\q\v\3\a\1\r\4\z\i\y\b\f\e\v\d\n\7\h\p\d\b\o\e\f\2\2\h\r\k\v\t\o\h\a\o\s\f\h\6\7\a\b\1\g\2\p\n\1\e\d\h\l\r\q\m\9\q\o\a\c\m\t\8\e\p\3\d\7\4\p\b\n\s\z\h\v\9\g\j\6\u\0\3\s\f\j\l\h\t\b\9\9\q\l\o\b\d\t\o\t\l\4\b\l\t\x\s\r\5\f\9\1\5\e\8\0\a\d\v\v\w\r\9\m\3\p\s\s\b\b\4\e\z\d\u\u\3\c\j\q\o\5\5\8\h\p\r\3\m\2\5\7\o\s\8\c\l\z\z\6\1\d\f\l\8\u\t\w\y\m\b\h\4\j ]] 00:06:16.826 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.826 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:16.826 [2024-11-20 16:51:35.276690] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:16.826 [2024-11-20 16:51:35.276815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60604 ] 00:06:17.084 [2024-11-20 16:51:35.423501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.084 [2024-11-20 16:51:35.485978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.084 [2024-11-20 16:51:35.542213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.084  [2024-11-20T16:51:35.879Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.343 00:06:17.343 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c99t7ky8yxwmp8n4d281i3qwh91mqgmmuzuekygwvc4255mo54ytpnbl4l6e84pj6r51foz5svfrrwd71mvrr8gnukjetss7foc09a9rfxgb8znltes57leeqzk51ji7mhwh2t0ioxf9x15ocx3k0amn6zy44dojwkn22ck0vfyove0u0qenzukz5dzqwwb0bshggw5rljdpdzsvkb8apujlheqldnh73vjtccbzuwtittgeh3d00vpaculdohk84vkjhuqdbs2b3q42eouh4scifgxccj71otxqu4rt3akjce82p0m2ztiocn6udqvwappknrloavb9yrhaobqv3a1r4ziybfevdn7hpdboef22hrkvtohaosfh67ab1g2pn1edhlrqm9qoacmt8ep3d74pbnszhv9gj6u03sfjlhtb99qlobdtotl4bltxsr5f915e80advvwr9m3pssbb4ezduu3cjqo558hpr3m257os8clzz61dfl8utwymbh4j == \c\9\9\t\7\k\y\8\y\x\w\m\p\8\n\4\d\2\8\1\i\3\q\w\h\9\1\m\q\g\m\m\u\z\u\e\k\y\g\w\v\c\4\2\5\5\m\o\5\4\y\t\p\n\b\l\4\l\6\e\8\4\p\j\6\r\5\1\f\o\z\5\s\v\f\r\r\w\d\7\1\m\v\r\r\8\g\n\u\k\j\e\t\s\s\7\f\o\c\0\9\a\9\r\f\x\g\b\8\z\n\l\t\e\s\5\7\l\e\e\q\z\k\5\1\j\i\7\m\h\w\h\2\t\0\i\o\x\f\9\x\1\5\o\c\x\3\k\0\a\m\n\6\z\y\4\4\d\o\j\w\k\n\2\2\c\k\0\v\f\y\o\v\e\0\u\0\q\e\n\z\u\k\z\5\d\z\q\w\w\b\0\b\s\h\g\g\w\5\r\l\j\d\p\d\z\s\v\k\b\8\a\p\u\j\l\h\e\q\l\d\n\h\7\3\v\j\t\c\c\b\z\u\w\t\i\t\t\g\e\h\3\d\0\0\v\p\a\c\u\l\d\o\h\k\8\4\v\k\j\h\u\q\d\b\s\2\b\3\q\4\2\e\o\u\h\4\s\c\i\f\g\x\c\c\j\7\1\o\t\x\q\u\4\r\t\3\a\k\j\c\e\8\2\p\0\m\2\z\t\i\o\c\n\6\u\d\q\v\w\a\p\p\k\n\r\l\o\a\v\b\9\y\r\h\a\o\b\q\v\3\a\1\r\4\z\i\y\b\f\e\v\d\n\7\h\p\d\b\o\e\f\2\2\h\r\k\v\t\o\h\a\o\s\f\h\6\7\a\b\1\g\2\p\n\1\e\d\h\l\r\q\m\9\q\o\a\c\m\t\8\e\p\3\d\7\4\p\b\n\s\z\h\v\9\g\j\6\u\0\3\s\f\j\l\h\t\b\9\9\q\l\o\b\d\t\o\t\l\4\b\l\t\x\s\r\5\f\9\1\5\e\8\0\a\d\v\v\w\r\9\m\3\p\s\s\b\b\4\e\z\d\u\u\3\c\j\q\o\5\5\8\h\p\r\3\m\2\5\7\o\s\8\c\l\z\z\6\1\d\f\l\8\u\t\w\y\m\b\h\4\j ]] 00:06:17.343 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.343 16:51:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:17.343 [2024-11-20 16:51:35.851975] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:17.343 [2024-11-20 16:51:35.852082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60611 ] 00:06:17.602 [2024-11-20 16:51:35.999801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.602 [2024-11-20 16:51:36.058396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.602 [2024-11-20 16:51:36.114501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.874  [2024-11-20T16:51:36.410Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.874 00:06:17.874 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c99t7ky8yxwmp8n4d281i3qwh91mqgmmuzuekygwvc4255mo54ytpnbl4l6e84pj6r51foz5svfrrwd71mvrr8gnukjetss7foc09a9rfxgb8znltes57leeqzk51ji7mhwh2t0ioxf9x15ocx3k0amn6zy44dojwkn22ck0vfyove0u0qenzukz5dzqwwb0bshggw5rljdpdzsvkb8apujlheqldnh73vjtccbzuwtittgeh3d00vpaculdohk84vkjhuqdbs2b3q42eouh4scifgxccj71otxqu4rt3akjce82p0m2ztiocn6udqvwappknrloavb9yrhaobqv3a1r4ziybfevdn7hpdboef22hrkvtohaosfh67ab1g2pn1edhlrqm9qoacmt8ep3d74pbnszhv9gj6u03sfjlhtb99qlobdtotl4bltxsr5f915e80advvwr9m3pssbb4ezduu3cjqo558hpr3m257os8clzz61dfl8utwymbh4j == \c\9\9\t\7\k\y\8\y\x\w\m\p\8\n\4\d\2\8\1\i\3\q\w\h\9\1\m\q\g\m\m\u\z\u\e\k\y\g\w\v\c\4\2\5\5\m\o\5\4\y\t\p\n\b\l\4\l\6\e\8\4\p\j\6\r\5\1\f\o\z\5\s\v\f\r\r\w\d\7\1\m\v\r\r\8\g\n\u\k\j\e\t\s\s\7\f\o\c\0\9\a\9\r\f\x\g\b\8\z\n\l\t\e\s\5\7\l\e\e\q\z\k\5\1\j\i\7\m\h\w\h\2\t\0\i\o\x\f\9\x\1\5\o\c\x\3\k\0\a\m\n\6\z\y\4\4\d\o\j\w\k\n\2\2\c\k\0\v\f\y\o\v\e\0\u\0\q\e\n\z\u\k\z\5\d\z\q\w\w\b\0\b\s\h\g\g\w\5\r\l\j\d\p\d\z\s\v\k\b\8\a\p\u\j\l\h\e\q\l\d\n\h\7\3\v\j\t\c\c\b\z\u\w\t\i\t\t\g\e\h\3\d\0\0\v\p\a\c\u\l\d\o\h\k\8\4\v\k\j\h\u\q\d\b\s\2\b\3\q\4\2\e\o\u\h\4\s\c\i\f\g\x\c\c\j\7\1\o\t\x\q\u\4\r\t\3\a\k\j\c\e\8\2\p\0\m\2\z\t\i\o\c\n\6\u\d\q\v\w\a\p\p\k\n\r\l\o\a\v\b\9\y\r\h\a\o\b\q\v\3\a\1\r\4\z\i\y\b\f\e\v\d\n\7\h\p\d\b\o\e\f\2\2\h\r\k\v\t\o\h\a\o\s\f\h\6\7\a\b\1\g\2\p\n\1\e\d\h\l\r\q\m\9\q\o\a\c\m\t\8\e\p\3\d\7\4\p\b\n\s\z\h\v\9\g\j\6\u\0\3\s\f\j\l\h\t\b\9\9\q\l\o\b\d\t\o\t\l\4\b\l\t\x\s\r\5\f\9\1\5\e\8\0\a\d\v\v\w\r\9\m\3\p\s\s\b\b\4\e\z\d\u\u\3\c\j\q\o\5\5\8\h\p\r\3\m\2\5\7\o\s\8\c\l\z\z\6\1\d\f\l\8\u\t\w\y\m\b\h\4\j ]] 00:06:17.874 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.874 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:18.133 [2024-11-20 16:51:36.418075] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:18.133 [2024-11-20 16:51:36.418179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:06:18.133 [2024-11-20 16:51:36.565812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.133 [2024-11-20 16:51:36.629959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.391 [2024-11-20 16:51:36.687527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.391  [2024-11-20T16:51:37.187Z] Copying: 512/512 [B] (average 250 kBps) 00:06:18.651 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c99t7ky8yxwmp8n4d281i3qwh91mqgmmuzuekygwvc4255mo54ytpnbl4l6e84pj6r51foz5svfrrwd71mvrr8gnukjetss7foc09a9rfxgb8znltes57leeqzk51ji7mhwh2t0ioxf9x15ocx3k0amn6zy44dojwkn22ck0vfyove0u0qenzukz5dzqwwb0bshggw5rljdpdzsvkb8apujlheqldnh73vjtccbzuwtittgeh3d00vpaculdohk84vkjhuqdbs2b3q42eouh4scifgxccj71otxqu4rt3akjce82p0m2ztiocn6udqvwappknrloavb9yrhaobqv3a1r4ziybfevdn7hpdboef22hrkvtohaosfh67ab1g2pn1edhlrqm9qoacmt8ep3d74pbnszhv9gj6u03sfjlhtb99qlobdtotl4bltxsr5f915e80advvwr9m3pssbb4ezduu3cjqo558hpr3m257os8clzz61dfl8utwymbh4j == \c\9\9\t\7\k\y\8\y\x\w\m\p\8\n\4\d\2\8\1\i\3\q\w\h\9\1\m\q\g\m\m\u\z\u\e\k\y\g\w\v\c\4\2\5\5\m\o\5\4\y\t\p\n\b\l\4\l\6\e\8\4\p\j\6\r\5\1\f\o\z\5\s\v\f\r\r\w\d\7\1\m\v\r\r\8\g\n\u\k\j\e\t\s\s\7\f\o\c\0\9\a\9\r\f\x\g\b\8\z\n\l\t\e\s\5\7\l\e\e\q\z\k\5\1\j\i\7\m\h\w\h\2\t\0\i\o\x\f\9\x\1\5\o\c\x\3\k\0\a\m\n\6\z\y\4\4\d\o\j\w\k\n\2\2\c\k\0\v\f\y\o\v\e\0\u\0\q\e\n\z\u\k\z\5\d\z\q\w\w\b\0\b\s\h\g\g\w\5\r\l\j\d\p\d\z\s\v\k\b\8\a\p\u\j\l\h\e\q\l\d\n\h\7\3\v\j\t\c\c\b\z\u\w\t\i\t\t\g\e\h\3\d\0\0\v\p\a\c\u\l\d\o\h\k\8\4\v\k\j\h\u\q\d\b\s\2\b\3\q\4\2\e\o\u\h\4\s\c\i\f\g\x\c\c\j\7\1\o\t\x\q\u\4\r\t\3\a\k\j\c\e\8\2\p\0\m\2\z\t\i\o\c\n\6\u\d\q\v\w\a\p\p\k\n\r\l\o\a\v\b\9\y\r\h\a\o\b\q\v\3\a\1\r\4\z\i\y\b\f\e\v\d\n\7\h\p\d\b\o\e\f\2\2\h\r\k\v\t\o\h\a\o\s\f\h\6\7\a\b\1\g\2\p\n\1\e\d\h\l\r\q\m\9\q\o\a\c\m\t\8\e\p\3\d\7\4\p\b\n\s\z\h\v\9\g\j\6\u\0\3\s\f\j\l\h\t\b\9\9\q\l\o\b\d\t\o\t\l\4\b\l\t\x\s\r\5\f\9\1\5\e\8\0\a\d\v\v\w\r\9\m\3\p\s\s\b\b\4\e\z\d\u\u\3\c\j\q\o\5\5\8\h\p\r\3\m\2\5\7\o\s\8\c\l\z\z\6\1\d\f\l\8\u\t\w\y\m\b\h\4\j ]] 00:06:18.651 00:06:18.651 real 0m4.619s 00:06:18.651 user 0m2.480s 00:06:18.651 sys 0m1.157s 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.651 ************************************ 00:06:18.651 END TEST dd_flags_misc_forced_aio 00:06:18.651 ************************************ 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.651 00:06:18.651 real 0m20.780s 00:06:18.651 user 0m10.102s 00:06:18.651 sys 0m6.707s 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.651 ************************************ 00:06:18.651 END TEST spdk_dd_posix 
00:06:18.651 ************************************ 00:06:18.651 16:51:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.651 16:51:37 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:18.651 16:51:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.651 16:51:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.651 16:51:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:18.651 ************************************ 00:06:18.651 START TEST spdk_dd_malloc 00:06:18.651 ************************************ 00:06:18.651 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:18.651 * Looking for test storage... 00:06:18.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:18.651 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.651 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.651 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.911 --rc genhtml_branch_coverage=1 00:06:18.911 --rc genhtml_function_coverage=1 00:06:18.911 --rc genhtml_legend=1 00:06:18.911 --rc geninfo_all_blocks=1 00:06:18.911 --rc geninfo_unexecuted_blocks=1 00:06:18.911 00:06:18.911 ' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.911 --rc genhtml_branch_coverage=1 00:06:18.911 --rc genhtml_function_coverage=1 00:06:18.911 --rc genhtml_legend=1 00:06:18.911 --rc geninfo_all_blocks=1 00:06:18.911 --rc geninfo_unexecuted_blocks=1 00:06:18.911 00:06:18.911 ' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.911 --rc genhtml_branch_coverage=1 00:06:18.911 --rc genhtml_function_coverage=1 00:06:18.911 --rc genhtml_legend=1 00:06:18.911 --rc geninfo_all_blocks=1 00:06:18.911 --rc geninfo_unexecuted_blocks=1 00:06:18.911 00:06:18.911 ' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.911 --rc genhtml_branch_coverage=1 00:06:18.911 --rc genhtml_function_coverage=1 00:06:18.911 --rc genhtml_legend=1 00:06:18.911 --rc geninfo_all_blocks=1 00:06:18.911 --rc geninfo_unexecuted_blocks=1 00:06:18.911 00:06:18.911 ' 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.911 16:51:37 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:18.911 16:51:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:18.912 ************************************ 00:06:18.912 START TEST dd_malloc_copy 00:06:18.912 ************************************ 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:18.912 16:51:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:18.912 [2024-11-20 16:51:37.301589] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:18.912 [2024-11-20 16:51:37.301775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60701 ] 00:06:18.912 { 00:06:18.912 "subsystems": [ 00:06:18.912 { 00:06:18.912 "subsystem": "bdev", 00:06:18.912 "config": [ 00:06:18.912 { 00:06:18.912 "params": { 00:06:18.912 "block_size": 512, 00:06:18.912 "num_blocks": 1048576, 00:06:18.912 "name": "malloc0" 00:06:18.912 }, 00:06:18.912 "method": "bdev_malloc_create" 00:06:18.912 }, 00:06:18.912 { 00:06:18.912 "params": { 00:06:18.912 "block_size": 512, 00:06:18.912 "num_blocks": 1048576, 00:06:18.912 "name": "malloc1" 00:06:18.912 }, 00:06:18.912 "method": "bdev_malloc_create" 00:06:18.912 }, 00:06:18.912 { 00:06:18.912 "method": "bdev_wait_for_examine" 00:06:18.912 } 00:06:18.912 ] 00:06:18.912 } 00:06:18.912 ] 00:06:18.912 } 00:06:19.171 [2024-11-20 16:51:37.451622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.171 [2024-11-20 16:51:37.511618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.171 [2024-11-20 16:51:37.567477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.546  [2024-11-20T16:51:40.018Z] Copying: 208/512 [MB] (208 MBps) [2024-11-20T16:51:40.586Z] Copying: 413/512 [MB] (204 MBps) [2024-11-20T16:51:41.154Z] Copying: 512/512 [MB] (average 206 MBps) 00:06:22.618 00:06:22.618 16:51:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:22.618 16:51:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:22.618 16:51:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:22.618 16:51:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.618 { 00:06:22.618 "subsystems": [ 00:06:22.618 { 00:06:22.618 "subsystem": "bdev", 00:06:22.618 "config": [ 00:06:22.618 { 00:06:22.618 "params": { 00:06:22.618 "block_size": 512, 00:06:22.618 "num_blocks": 1048576, 00:06:22.618 "name": "malloc0" 00:06:22.618 }, 00:06:22.618 "method": "bdev_malloc_create" 00:06:22.618 }, 00:06:22.618 { 00:06:22.618 "params": { 00:06:22.618 "block_size": 512, 00:06:22.618 "num_blocks": 1048576, 00:06:22.618 "name": "malloc1" 00:06:22.618 }, 00:06:22.618 "method": 
"bdev_malloc_create" 00:06:22.618 }, 00:06:22.618 { 00:06:22.618 "method": "bdev_wait_for_examine" 00:06:22.618 } 00:06:22.618 ] 00:06:22.618 } 00:06:22.618 ] 00:06:22.618 } 00:06:22.618 [2024-11-20 16:51:41.035856] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:22.618 [2024-11-20 16:51:41.035946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:06:22.877 [2024-11-20 16:51:41.182262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.877 [2024-11-20 16:51:41.244689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.877 [2024-11-20 16:51:41.300328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.254  [2024-11-20T16:51:43.726Z] Copying: 202/512 [MB] (202 MBps) [2024-11-20T16:51:44.293Z] Copying: 411/512 [MB] (208 MBps) [2024-11-20T16:51:44.863Z] Copying: 512/512 [MB] (average 205 MBps) 00:06:26.327 00:06:26.327 00:06:26.327 real 0m7.509s 00:06:26.327 user 0m6.505s 00:06:26.327 sys 0m0.854s 00:06:26.327 16:51:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.327 16:51:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.327 ************************************ 00:06:26.327 END TEST dd_malloc_copy 00:06:26.327 ************************************ 00:06:26.327 00:06:26.327 real 0m7.758s 00:06:26.327 user 0m6.656s 00:06:26.327 sys 0m0.958s 00:06:26.327 16:51:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.327 ************************************ 00:06:26.327 END TEST spdk_dd_malloc 00:06:26.327 ************************************ 00:06:26.327 16:51:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:26.327 16:51:44 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:26.327 16:51:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:26.327 16:51:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.327 16:51:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:26.327 ************************************ 00:06:26.327 START TEST spdk_dd_bdev_to_bdev 00:06:26.327 ************************************ 00:06:26.327 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:26.586 * Looking for test storage... 
00:06:26.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.586 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.587 --rc genhtml_branch_coverage=1 00:06:26.587 --rc genhtml_function_coverage=1 00:06:26.587 --rc genhtml_legend=1 00:06:26.587 --rc geninfo_all_blocks=1 00:06:26.587 --rc geninfo_unexecuted_blocks=1 00:06:26.587 00:06:26.587 ' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.587 --rc genhtml_branch_coverage=1 00:06:26.587 --rc genhtml_function_coverage=1 00:06:26.587 --rc genhtml_legend=1 00:06:26.587 --rc geninfo_all_blocks=1 00:06:26.587 --rc geninfo_unexecuted_blocks=1 00:06:26.587 00:06:26.587 ' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.587 --rc genhtml_branch_coverage=1 00:06:26.587 --rc genhtml_function_coverage=1 00:06:26.587 --rc genhtml_legend=1 00:06:26.587 --rc geninfo_all_blocks=1 00:06:26.587 --rc geninfo_unexecuted_blocks=1 00:06:26.587 00:06:26.587 ' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.587 --rc genhtml_branch_coverage=1 00:06:26.587 --rc genhtml_function_coverage=1 00:06:26.587 --rc genhtml_legend=1 00:06:26.587 --rc geninfo_all_blocks=1 00:06:26.587 --rc geninfo_unexecuted_blocks=1 00:06:26.587 00:06:26.587 ' 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:26.587 16:51:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.587 16:51:45 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.587 ************************************ 00:06:26.587 START TEST dd_inflate_file 00:06:26.587 ************************************ 00:06:26.587 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:26.587 [2024-11-20 16:51:45.065870] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:26.587 [2024-11-20 16:51:45.065980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60867 ] 00:06:26.846 [2024-11-20 16:51:45.209452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.846 [2024-11-20 16:51:45.263387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.846 [2024-11-20 16:51:45.321383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.104  [2024-11-20T16:51:45.640Z] Copying: 64/64 [MB] (average 1488 MBps) 00:06:27.104 00:06:27.104 00:06:27.104 real 0m0.579s 00:06:27.104 user 0m0.333s 00:06:27.104 sys 0m0.312s 00:06:27.104 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.104 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:27.104 ************************************ 00:06:27.104 END TEST dd_inflate_file 00:06:27.104 ************************************ 00:06:27.104 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.364 ************************************ 00:06:27.364 START TEST dd_copy_to_out_bdev 00:06:27.364 ************************************ 00:06:27.364 16:51:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:27.364 [2024-11-20 16:51:45.695244] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
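The test_file0_size printed just above is consistent with what the trace wrote into dd.dump0: the 26-character magic line plus the newline from echo (27 bytes), followed by the 64 blocks of 1 048 576 zero bytes that dd_inflate_file appended, gives 27 + 64 × 1 048 576 = 67 108 891 bytes, exactly the wc -c result.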
00:06:27.364 [2024-11-20 16:51:45.695346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:06:27.364 { 00:06:27.364 "subsystems": [ 00:06:27.364 { 00:06:27.364 "subsystem": "bdev", 00:06:27.364 "config": [ 00:06:27.364 { 00:06:27.364 "params": { 00:06:27.364 "trtype": "pcie", 00:06:27.364 "traddr": "0000:00:10.0", 00:06:27.364 "name": "Nvme0" 00:06:27.364 }, 00:06:27.364 "method": "bdev_nvme_attach_controller" 00:06:27.364 }, 00:06:27.364 { 00:06:27.364 "params": { 00:06:27.364 "trtype": "pcie", 00:06:27.364 "traddr": "0000:00:11.0", 00:06:27.364 "name": "Nvme1" 00:06:27.364 }, 00:06:27.364 "method": "bdev_nvme_attach_controller" 00:06:27.364 }, 00:06:27.364 { 00:06:27.364 "method": "bdev_wait_for_examine" 00:06:27.364 } 00:06:27.364 ] 00:06:27.364 } 00:06:27.364 ] 00:06:27.364 } 00:06:27.364 [2024-11-20 16:51:45.836782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.364 [2024-11-20 16:51:45.888457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.623 [2024-11-20 16:51:45.947206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.002  [2024-11-20T16:51:47.538Z] Copying: 56/64 [MB] (56 MBps) [2024-11-20T16:51:47.538Z] Copying: 64/64 [MB] (average 56 MBps) 00:06:29.002 00:06:29.002 00:06:29.002 real 0m1.855s 00:06:29.002 user 0m1.627s 00:06:29.002 sys 0m1.487s 00:06:29.002 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.002 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:29.002 ************************************ 00:06:29.002 END TEST dd_copy_to_out_bdev 00:06:29.002 ************************************ 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:29.261 ************************************ 00:06:29.261 START TEST dd_offset_magic 00:06:29.261 ************************************ 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:29.261 16:51:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:29.261 [2024-11-20 16:51:47.604599] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:29.261 [2024-11-20 16:51:47.604685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60947 ] 00:06:29.261 { 00:06:29.261 "subsystems": [ 00:06:29.261 { 00:06:29.261 "subsystem": "bdev", 00:06:29.261 "config": [ 00:06:29.261 { 00:06:29.261 "params": { 00:06:29.261 "trtype": "pcie", 00:06:29.261 "traddr": "0000:00:10.0", 00:06:29.261 "name": "Nvme0" 00:06:29.261 }, 00:06:29.261 "method": "bdev_nvme_attach_controller" 00:06:29.261 }, 00:06:29.261 { 00:06:29.261 "params": { 00:06:29.261 "trtype": "pcie", 00:06:29.261 "traddr": "0000:00:11.0", 00:06:29.261 "name": "Nvme1" 00:06:29.261 }, 00:06:29.261 "method": "bdev_nvme_attach_controller" 00:06:29.261 }, 00:06:29.261 { 00:06:29.261 "method": "bdev_wait_for_examine" 00:06:29.261 } 00:06:29.261 ] 00:06:29.261 } 00:06:29.261 ] 00:06:29.261 } 00:06:29.261 [2024-11-20 16:51:47.745974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.520 [2024-11-20 16:51:47.800940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.520 [2024-11-20 16:51:47.855917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.777  [2024-11-20T16:51:48.571Z] Copying: 65/65 [MB] (average 928 MBps) 00:06:30.035 00:06:30.035 16:51:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:30.035 16:51:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:30.035 16:51:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:30.035 16:51:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:30.035 { 00:06:30.035 "subsystems": [ 00:06:30.035 { 00:06:30.035 "subsystem": "bdev", 00:06:30.035 "config": [ 00:06:30.035 { 00:06:30.035 "params": { 00:06:30.035 "trtype": "pcie", 00:06:30.035 "traddr": "0000:00:10.0", 00:06:30.035 "name": "Nvme0" 00:06:30.035 }, 00:06:30.035 "method": "bdev_nvme_attach_controller" 00:06:30.035 }, 00:06:30.035 { 00:06:30.035 "params": { 00:06:30.035 "trtype": "pcie", 00:06:30.035 "traddr": "0000:00:11.0", 00:06:30.035 "name": "Nvme1" 00:06:30.035 }, 00:06:30.035 "method": "bdev_nvme_attach_controller" 00:06:30.035 }, 00:06:30.035 { 00:06:30.035 "method": "bdev_wait_for_examine" 00:06:30.035 } 00:06:30.035 ] 00:06:30.035 } 00:06:30.035 ] 00:06:30.035 } 00:06:30.035 [2024-11-20 16:51:48.412163] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:30.035 [2024-11-20 16:51:48.412359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60967 ] 00:06:30.035 [2024-11-20 16:51:48.568388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.293 [2024-11-20 16:51:48.623028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.293 [2024-11-20 16:51:48.678881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.552  [2024-11-20T16:51:49.088Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:30.552 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:30.552 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:30.810 [2024-11-20 16:51:49.136872] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:30.810 [2024-11-20 16:51:49.137046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60978 ] 00:06:30.810 { 00:06:30.810 "subsystems": [ 00:06:30.810 { 00:06:30.810 "subsystem": "bdev", 00:06:30.810 "config": [ 00:06:30.810 { 00:06:30.810 "params": { 00:06:30.810 "trtype": "pcie", 00:06:30.810 "traddr": "0000:00:10.0", 00:06:30.810 "name": "Nvme0" 00:06:30.810 }, 00:06:30.810 "method": "bdev_nvme_attach_controller" 00:06:30.810 }, 00:06:30.810 { 00:06:30.810 "params": { 00:06:30.810 "trtype": "pcie", 00:06:30.810 "traddr": "0000:00:11.0", 00:06:30.810 "name": "Nvme1" 00:06:30.810 }, 00:06:30.810 "method": "bdev_nvme_attach_controller" 00:06:30.810 }, 00:06:30.810 { 00:06:30.810 "method": "bdev_wait_for_examine" 00:06:30.810 } 00:06:30.810 ] 00:06:30.810 } 00:06:30.810 ] 00:06:30.810 } 00:06:30.810 [2024-11-20 16:51:49.287724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.069 [2024-11-20 16:51:49.349505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.069 [2024-11-20 16:51:49.405588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.328  [2024-11-20T16:51:50.123Z] Copying: 65/65 [MB] (average 915 MBps) 00:06:31.587 00:06:31.587 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:31.587 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:31.587 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:31.587 16:51:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:31.587 [2024-11-20 16:51:49.945797] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:31.587 [2024-11-20 16:51:49.945930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:06:31.587 { 00:06:31.587 "subsystems": [ 00:06:31.587 { 00:06:31.587 "subsystem": "bdev", 00:06:31.587 "config": [ 00:06:31.587 { 00:06:31.587 "params": { 00:06:31.587 "trtype": "pcie", 00:06:31.587 "traddr": "0000:00:10.0", 00:06:31.587 "name": "Nvme0" 00:06:31.587 }, 00:06:31.587 "method": "bdev_nvme_attach_controller" 00:06:31.587 }, 00:06:31.587 { 00:06:31.587 "params": { 00:06:31.587 "trtype": "pcie", 00:06:31.587 "traddr": "0000:00:11.0", 00:06:31.587 "name": "Nvme1" 00:06:31.587 }, 00:06:31.587 "method": "bdev_nvme_attach_controller" 00:06:31.587 }, 00:06:31.587 { 00:06:31.587 "method": "bdev_wait_for_examine" 00:06:31.587 } 00:06:31.587 ] 00:06:31.587 } 00:06:31.587 ] 00:06:31.587 } 00:06:31.587 [2024-11-20 16:51:50.093343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.846 [2024-11-20 16:51:50.154512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.846 [2024-11-20 16:51:50.214493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.104  [2024-11-20T16:51:50.640Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:32.104 00:06:32.104 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:32.104 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:32.104 00:06:32.104 real 0m3.047s 00:06:32.104 user 0m2.180s 00:06:32.104 sys 0m0.961s 00:06:32.104 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.104 ************************************ 00:06:32.104 END TEST dd_offset_magic 00:06:32.104 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 ************************************ 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:32.363 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:32.364 16:51:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.364 [2024-11-20 16:51:50.720282] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:32.364 [2024-11-20 16:51:50.720436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:06:32.364 { 00:06:32.364 "subsystems": [ 00:06:32.364 { 00:06:32.364 "subsystem": "bdev", 00:06:32.364 "config": [ 00:06:32.364 { 00:06:32.364 "params": { 00:06:32.364 "trtype": "pcie", 00:06:32.364 "traddr": "0000:00:10.0", 00:06:32.364 "name": "Nvme0" 00:06:32.364 }, 00:06:32.364 "method": "bdev_nvme_attach_controller" 00:06:32.364 }, 00:06:32.364 { 00:06:32.364 "params": { 00:06:32.364 "trtype": "pcie", 00:06:32.364 "traddr": "0000:00:11.0", 00:06:32.364 "name": "Nvme1" 00:06:32.364 }, 00:06:32.364 "method": "bdev_nvme_attach_controller" 00:06:32.364 }, 00:06:32.364 { 00:06:32.364 "method": "bdev_wait_for_examine" 00:06:32.364 } 00:06:32.364 ] 00:06:32.364 } 00:06:32.364 ] 00:06:32.364 } 00:06:32.364 [2024-11-20 16:51:50.871572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.680 [2024-11-20 16:51:50.927680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.680 [2024-11-20 16:51:50.983392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.680  [2024-11-20T16:51:51.474Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:32.938 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:32.938 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:32.939 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:32.939 16:51:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.939 { 00:06:32.939 "subsystems": [ 00:06:32.939 { 00:06:32.939 "subsystem": "bdev", 00:06:32.939 "config": [ 00:06:32.939 { 00:06:32.939 "params": { 00:06:32.939 "trtype": "pcie", 00:06:32.939 "traddr": "0000:00:10.0", 00:06:32.939 "name": "Nvme0" 00:06:32.939 }, 00:06:32.939 "method": "bdev_nvme_attach_controller" 00:06:32.939 }, 00:06:32.939 { 00:06:32.939 "params": { 00:06:32.939 "trtype": "pcie", 00:06:32.939 "traddr": "0000:00:11.0", 00:06:32.939 "name": "Nvme1" 00:06:32.939 }, 00:06:32.939 "method": "bdev_nvme_attach_controller" 00:06:32.939 }, 00:06:32.939 { 00:06:32.939 "method": "bdev_wait_for_examine" 00:06:32.939 } 00:06:32.939 ] 00:06:32.939 } 00:06:32.939 ] 00:06:32.939 } 00:06:32.939 [2024-11-20 16:51:51.418648] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:32.939 [2024-11-20 16:51:51.418761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61056 ] 00:06:33.197 [2024-11-20 16:51:51.565801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.197 [2024-11-20 16:51:51.623532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.197 [2024-11-20 16:51:51.678964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.455  [2024-11-20T16:51:52.249Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:06:33.713 00:06:33.713 16:51:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:33.713 00:06:33.713 real 0m7.248s 00:06:33.713 user 0m5.327s 00:06:33.713 sys 0m3.469s 00:06:33.713 16:51:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.713 ************************************ 00:06:33.713 END TEST spdk_dd_bdev_to_bdev 00:06:33.713 ************************************ 00:06:33.713 16:51:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:33.713 16:51:52 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:33.713 16:51:52 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:33.713 16:51:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.713 16:51:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.713 16:51:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:33.713 ************************************ 00:06:33.713 START TEST spdk_dd_uring 00:06:33.713 ************************************ 00:06:33.713 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:33.713 * Looking for test storage... 
00:06:33.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:33.713 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.713 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.713 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.973 --rc genhtml_branch_coverage=1 00:06:33.973 --rc genhtml_function_coverage=1 00:06:33.973 --rc genhtml_legend=1 00:06:33.973 --rc geninfo_all_blocks=1 00:06:33.973 --rc geninfo_unexecuted_blocks=1 00:06:33.973 00:06:33.973 ' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.973 --rc genhtml_branch_coverage=1 00:06:33.973 --rc genhtml_function_coverage=1 00:06:33.973 --rc genhtml_legend=1 00:06:33.973 --rc geninfo_all_blocks=1 00:06:33.973 --rc geninfo_unexecuted_blocks=1 00:06:33.973 00:06:33.973 ' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.973 --rc genhtml_branch_coverage=1 00:06:33.973 --rc genhtml_function_coverage=1 00:06:33.973 --rc genhtml_legend=1 00:06:33.973 --rc geninfo_all_blocks=1 00:06:33.973 --rc geninfo_unexecuted_blocks=1 00:06:33.973 00:06:33.973 ' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.973 --rc genhtml_branch_coverage=1 00:06:33.973 --rc genhtml_function_coverage=1 00:06:33.973 --rc genhtml_legend=1 00:06:33.973 --rc geninfo_all_blocks=1 00:06:33.973 --rc geninfo_unexecuted_blocks=1 00:06:33.973 00:06:33.973 ' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:33.973 ************************************ 00:06:33.973 START TEST dd_uring_copy 00:06:33.973 ************************************ 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:33.973 
16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:33.973 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.974 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=hahi41evdaj21556joig9avtld6zyzu6jmcqpiifrqke45ly8i07x7f733hmqoa3twvgydlezvk0rw2a1p5zct6iuocmkn0e9snya91kao0a68zwht5y6jn0n7kslf5lzhqlnmevjwusfbfofq10lzq35561ocx582tt2ghtezxkdmox41ror8rahd60r2negmeozericcyqb4woz79tamr9h48djjf5ue4kdezvyanrsiwothf9wrw4vasmd0168xxzyskx1obc82kslsc247h01z25175jxkgdzq9ozhdxij4j8nd6alkgshf8c7pzw8j062169grj75sdqmnccalzx3db1kiw01rqnbh056ytnfros5v1bwjsnbqakqezaarfax263bgv6d0dl85yhk2qgugxqxak8tmtojbk5h9knmhk6ccr9dc43qppnn332cwarzs0imb7lr7o1yfal6v1y7lpf3clh7dfj55rzco6ojl4hgf9147m78gx9dylhprsarc18u21orol4kjo34cye8e6w7pg7ynv18v8hvvdkbch6kc96l7213pdzvl97qx4cul7x525xzvv7tqgubakkzi88upf2fn1vq9yjdvgso7q3vdr8qgg94xi9v8zy6cs5tbsxfz8o0sa85m5ar0obezl7n543lpj2wjf597enm706drtziqxwufw0y6dfwuc8em9vd93hv1dmrvz24owkmu6x5x1b3phdmafbznq7q34dzwojmorstdlrfzepgfqpjd3rq1k5zourrkal27jvvcty5agoidhdmbktgn7st9ep85ye0gl9ox69cz992yqgaewv4db6m0aq8phzqfybdatbur0vuh69p45xftoypwsv2i3gq3zify62she5nxn3le5s886uk6pbfi9awdh7eot4gpo4xdx1pp6omql81mlt7mwbpf5kx4nx7ogp1ssdxq42nwnmerh8icmuymafdxgvu3tl1lsx47fv1b4u6d43whb1idzrmxczvnz 00:06:33.974 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
hahi41evdaj21556joig9avtld6zyzu6jmcqpiifrqke45ly8i07x7f733hmqoa3twvgydlezvk0rw2a1p5zct6iuocmkn0e9snya91kao0a68zwht5y6jn0n7kslf5lzhqlnmevjwusfbfofq10lzq35561ocx582tt2ghtezxkdmox41ror8rahd60r2negmeozericcyqb4woz79tamr9h48djjf5ue4kdezvyanrsiwothf9wrw4vasmd0168xxzyskx1obc82kslsc247h01z25175jxkgdzq9ozhdxij4j8nd6alkgshf8c7pzw8j062169grj75sdqmnccalzx3db1kiw01rqnbh056ytnfros5v1bwjsnbqakqezaarfax263bgv6d0dl85yhk2qgugxqxak8tmtojbk5h9knmhk6ccr9dc43qppnn332cwarzs0imb7lr7o1yfal6v1y7lpf3clh7dfj55rzco6ojl4hgf9147m78gx9dylhprsarc18u21orol4kjo34cye8e6w7pg7ynv18v8hvvdkbch6kc96l7213pdzvl97qx4cul7x525xzvv7tqgubakkzi88upf2fn1vq9yjdvgso7q3vdr8qgg94xi9v8zy6cs5tbsxfz8o0sa85m5ar0obezl7n543lpj2wjf597enm706drtziqxwufw0y6dfwuc8em9vd93hv1dmrvz24owkmu6x5x1b3phdmafbznq7q34dzwojmorstdlrfzepgfqpjd3rq1k5zourrkal27jvvcty5agoidhdmbktgn7st9ep85ye0gl9ox69cz992yqgaewv4db6m0aq8phzqfybdatbur0vuh69p45xftoypwsv2i3gq3zify62she5nxn3le5s886uk6pbfi9awdh7eot4gpo4xdx1pp6omql81mlt7mwbpf5kx4nx7ogp1ssdxq42nwnmerh8icmuymafdxgvu3tl1lsx47fv1b4u6d43whb1idzrmxczvnz 00:06:33.974 16:51:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:33.974 [2024-11-20 16:51:52.399387] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:33.974 [2024-11-20 16:51:52.399484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:06:34.232 [2024-11-20 16:51:52.547195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.232 [2024-11-20 16:51:52.605383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.232 [2024-11-20 16:51:52.659833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.799  [2024-11-20T16:51:53.903Z] Copying: 511/511 [MB] (average 1166 MBps) 00:06:35.367 00:06:35.367 16:51:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:35.367 16:51:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:35.367 16:51:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:35.367 16:51:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.367 [2024-11-20 16:51:53.776428] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:35.367 [2024-11-20 16:51:53.776529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61150 ] 00:06:35.367 { 00:06:35.367 "subsystems": [ 00:06:35.367 { 00:06:35.367 "subsystem": "bdev", 00:06:35.367 "config": [ 00:06:35.367 { 00:06:35.367 "params": { 00:06:35.367 "block_size": 512, 00:06:35.367 "num_blocks": 1048576, 00:06:35.367 "name": "malloc0" 00:06:35.367 }, 00:06:35.367 "method": "bdev_malloc_create" 00:06:35.367 }, 00:06:35.367 { 00:06:35.367 "params": { 00:06:35.367 "filename": "/dev/zram1", 00:06:35.367 "name": "uring0" 00:06:35.367 }, 00:06:35.367 "method": "bdev_uring_create" 00:06:35.367 }, 00:06:35.367 { 00:06:35.367 "method": "bdev_wait_for_examine" 00:06:35.367 } 00:06:35.367 ] 00:06:35.367 } 00:06:35.367 ] 00:06:35.367 } 00:06:35.627 [2024-11-20 16:51:53.922574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.627 [2024-11-20 16:51:53.982488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.627 [2024-11-20 16:51:54.039577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.005  [2024-11-20T16:51:56.477Z] Copying: 229/512 [MB] (229 MBps) [2024-11-20T16:51:56.478Z] Copying: 464/512 [MB] (235 MBps) [2024-11-20T16:51:57.045Z] Copying: 512/512 [MB] (average 232 MBps) 00:06:38.509 00:06:38.509 16:51:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:38.509 16:51:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:38.509 16:51:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:38.510 16:51:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:38.510 [2024-11-20 16:51:56.899313] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:38.510 [2024-11-20 16:51:56.899437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61194 ] 00:06:38.510 { 00:06:38.510 "subsystems": [ 00:06:38.510 { 00:06:38.510 "subsystem": "bdev", 00:06:38.510 "config": [ 00:06:38.510 { 00:06:38.510 "params": { 00:06:38.510 "block_size": 512, 00:06:38.510 "num_blocks": 1048576, 00:06:38.510 "name": "malloc0" 00:06:38.510 }, 00:06:38.510 "method": "bdev_malloc_create" 00:06:38.510 }, 00:06:38.510 { 00:06:38.510 "params": { 00:06:38.510 "filename": "/dev/zram1", 00:06:38.510 "name": "uring0" 00:06:38.510 }, 00:06:38.510 "method": "bdev_uring_create" 00:06:38.510 }, 00:06:38.510 { 00:06:38.510 "method": "bdev_wait_for_examine" 00:06:38.510 } 00:06:38.510 ] 00:06:38.510 } 00:06:38.510 ] 00:06:38.510 } 00:06:38.769 [2024-11-20 16:51:57.044942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.769 [2024-11-20 16:51:57.096278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.769 [2024-11-20 16:51:57.152292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.147  [2024-11-20T16:51:59.620Z] Copying: 180/512 [MB] (180 MBps) [2024-11-20T16:52:00.556Z] Copying: 339/512 [MB] (158 MBps) [2024-11-20T16:52:00.556Z] Copying: 486/512 [MB] (147 MBps) [2024-11-20T16:52:01.136Z] Copying: 512/512 [MB] (average 162 MBps) 00:06:42.600 00:06:42.600 16:52:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:42.601 16:52:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ hahi41evdaj21556joig9avtld6zyzu6jmcqpiifrqke45ly8i07x7f733hmqoa3twvgydlezvk0rw2a1p5zct6iuocmkn0e9snya91kao0a68zwht5y6jn0n7kslf5lzhqlnmevjwusfbfofq10lzq35561ocx582tt2ghtezxkdmox41ror8rahd60r2negmeozericcyqb4woz79tamr9h48djjf5ue4kdezvyanrsiwothf9wrw4vasmd0168xxzyskx1obc82kslsc247h01z25175jxkgdzq9ozhdxij4j8nd6alkgshf8c7pzw8j062169grj75sdqmnccalzx3db1kiw01rqnbh056ytnfros5v1bwjsnbqakqezaarfax263bgv6d0dl85yhk2qgugxqxak8tmtojbk5h9knmhk6ccr9dc43qppnn332cwarzs0imb7lr7o1yfal6v1y7lpf3clh7dfj55rzco6ojl4hgf9147m78gx9dylhprsarc18u21orol4kjo34cye8e6w7pg7ynv18v8hvvdkbch6kc96l7213pdzvl97qx4cul7x525xzvv7tqgubakkzi88upf2fn1vq9yjdvgso7q3vdr8qgg94xi9v8zy6cs5tbsxfz8o0sa85m5ar0obezl7n543lpj2wjf597enm706drtziqxwufw0y6dfwuc8em9vd93hv1dmrvz24owkmu6x5x1b3phdmafbznq7q34dzwojmorstdlrfzepgfqpjd3rq1k5zourrkal27jvvcty5agoidhdmbktgn7st9ep85ye0gl9ox69cz992yqgaewv4db6m0aq8phzqfybdatbur0vuh69p45xftoypwsv2i3gq3zify62she5nxn3le5s886uk6pbfi9awdh7eot4gpo4xdx1pp6omql81mlt7mwbpf5kx4nx7ogp1ssdxq42nwnmerh8icmuymafdxgvu3tl1lsx47fv1b4u6d43whb1idzrmxczvnz == 
\h\a\h\i\4\1\e\v\d\a\j\2\1\5\5\6\j\o\i\g\9\a\v\t\l\d\6\z\y\z\u\6\j\m\c\q\p\i\i\f\r\q\k\e\4\5\l\y\8\i\0\7\x\7\f\7\3\3\h\m\q\o\a\3\t\w\v\g\y\d\l\e\z\v\k\0\r\w\2\a\1\p\5\z\c\t\6\i\u\o\c\m\k\n\0\e\9\s\n\y\a\9\1\k\a\o\0\a\6\8\z\w\h\t\5\y\6\j\n\0\n\7\k\s\l\f\5\l\z\h\q\l\n\m\e\v\j\w\u\s\f\b\f\o\f\q\1\0\l\z\q\3\5\5\6\1\o\c\x\5\8\2\t\t\2\g\h\t\e\z\x\k\d\m\o\x\4\1\r\o\r\8\r\a\h\d\6\0\r\2\n\e\g\m\e\o\z\e\r\i\c\c\y\q\b\4\w\o\z\7\9\t\a\m\r\9\h\4\8\d\j\j\f\5\u\e\4\k\d\e\z\v\y\a\n\r\s\i\w\o\t\h\f\9\w\r\w\4\v\a\s\m\d\0\1\6\8\x\x\z\y\s\k\x\1\o\b\c\8\2\k\s\l\s\c\2\4\7\h\0\1\z\2\5\1\7\5\j\x\k\g\d\z\q\9\o\z\h\d\x\i\j\4\j\8\n\d\6\a\l\k\g\s\h\f\8\c\7\p\z\w\8\j\0\6\2\1\6\9\g\r\j\7\5\s\d\q\m\n\c\c\a\l\z\x\3\d\b\1\k\i\w\0\1\r\q\n\b\h\0\5\6\y\t\n\f\r\o\s\5\v\1\b\w\j\s\n\b\q\a\k\q\e\z\a\a\r\f\a\x\2\6\3\b\g\v\6\d\0\d\l\8\5\y\h\k\2\q\g\u\g\x\q\x\a\k\8\t\m\t\o\j\b\k\5\h\9\k\n\m\h\k\6\c\c\r\9\d\c\4\3\q\p\p\n\n\3\3\2\c\w\a\r\z\s\0\i\m\b\7\l\r\7\o\1\y\f\a\l\6\v\1\y\7\l\p\f\3\c\l\h\7\d\f\j\5\5\r\z\c\o\6\o\j\l\4\h\g\f\9\1\4\7\m\7\8\g\x\9\d\y\l\h\p\r\s\a\r\c\1\8\u\2\1\o\r\o\l\4\k\j\o\3\4\c\y\e\8\e\6\w\7\p\g\7\y\n\v\1\8\v\8\h\v\v\d\k\b\c\h\6\k\c\9\6\l\7\2\1\3\p\d\z\v\l\9\7\q\x\4\c\u\l\7\x\5\2\5\x\z\v\v\7\t\q\g\u\b\a\k\k\z\i\8\8\u\p\f\2\f\n\1\v\q\9\y\j\d\v\g\s\o\7\q\3\v\d\r\8\q\g\g\9\4\x\i\9\v\8\z\y\6\c\s\5\t\b\s\x\f\z\8\o\0\s\a\8\5\m\5\a\r\0\o\b\e\z\l\7\n\5\4\3\l\p\j\2\w\j\f\5\9\7\e\n\m\7\0\6\d\r\t\z\i\q\x\w\u\f\w\0\y\6\d\f\w\u\c\8\e\m\9\v\d\9\3\h\v\1\d\m\r\v\z\2\4\o\w\k\m\u\6\x\5\x\1\b\3\p\h\d\m\a\f\b\z\n\q\7\q\3\4\d\z\w\o\j\m\o\r\s\t\d\l\r\f\z\e\p\g\f\q\p\j\d\3\r\q\1\k\5\z\o\u\r\r\k\a\l\2\7\j\v\v\c\t\y\5\a\g\o\i\d\h\d\m\b\k\t\g\n\7\s\t\9\e\p\8\5\y\e\0\g\l\9\o\x\6\9\c\z\9\9\2\y\q\g\a\e\w\v\4\d\b\6\m\0\a\q\8\p\h\z\q\f\y\b\d\a\t\b\u\r\0\v\u\h\6\9\p\4\5\x\f\t\o\y\p\w\s\v\2\i\3\g\q\3\z\i\f\y\6\2\s\h\e\5\n\x\n\3\l\e\5\s\8\8\6\u\k\6\p\b\f\i\9\a\w\d\h\7\e\o\t\4\g\p\o\4\x\d\x\1\p\p\6\o\m\q\l\8\1\m\l\t\7\m\w\b\p\f\5\k\x\4\n\x\7\o\g\p\1\s\s\d\x\q\4\2\n\w\n\m\e\r\h\8\i\c\m\u\y\m\a\f\d\x\g\v\u\3\t\l\1\l\s\x\4\7\f\v\1\b\4\u\6\d\4\3\w\h\b\1\i\d\z\r\m\x\c\z\v\n\z ]] 00:06:42.601 16:52:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:42.601 16:52:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ hahi41evdaj21556joig9avtld6zyzu6jmcqpiifrqke45ly8i07x7f733hmqoa3twvgydlezvk0rw2a1p5zct6iuocmkn0e9snya91kao0a68zwht5y6jn0n7kslf5lzhqlnmevjwusfbfofq10lzq35561ocx582tt2ghtezxkdmox41ror8rahd60r2negmeozericcyqb4woz79tamr9h48djjf5ue4kdezvyanrsiwothf9wrw4vasmd0168xxzyskx1obc82kslsc247h01z25175jxkgdzq9ozhdxij4j8nd6alkgshf8c7pzw8j062169grj75sdqmnccalzx3db1kiw01rqnbh056ytnfros5v1bwjsnbqakqezaarfax263bgv6d0dl85yhk2qgugxqxak8tmtojbk5h9knmhk6ccr9dc43qppnn332cwarzs0imb7lr7o1yfal6v1y7lpf3clh7dfj55rzco6ojl4hgf9147m78gx9dylhprsarc18u21orol4kjo34cye8e6w7pg7ynv18v8hvvdkbch6kc96l7213pdzvl97qx4cul7x525xzvv7tqgubakkzi88upf2fn1vq9yjdvgso7q3vdr8qgg94xi9v8zy6cs5tbsxfz8o0sa85m5ar0obezl7n543lpj2wjf597enm706drtziqxwufw0y6dfwuc8em9vd93hv1dmrvz24owkmu6x5x1b3phdmafbznq7q34dzwojmorstdlrfzepgfqpjd3rq1k5zourrkal27jvvcty5agoidhdmbktgn7st9ep85ye0gl9ox69cz992yqgaewv4db6m0aq8phzqfybdatbur0vuh69p45xftoypwsv2i3gq3zify62she5nxn3le5s886uk6pbfi9awdh7eot4gpo4xdx1pp6omql81mlt7mwbpf5kx4nx7ogp1ssdxq42nwnmerh8icmuymafdxgvu3tl1lsx47fv1b4u6d43whb1idzrmxczvnz == 
\h\a\h\i\4\1\e\v\d\a\j\2\1\5\5\6\j\o\i\g\9\a\v\t\l\d\6\z\y\z\u\6\j\m\c\q\p\i\i\f\r\q\k\e\4\5\l\y\8\i\0\7\x\7\f\7\3\3\h\m\q\o\a\3\t\w\v\g\y\d\l\e\z\v\k\0\r\w\2\a\1\p\5\z\c\t\6\i\u\o\c\m\k\n\0\e\9\s\n\y\a\9\1\k\a\o\0\a\6\8\z\w\h\t\5\y\6\j\n\0\n\7\k\s\l\f\5\l\z\h\q\l\n\m\e\v\j\w\u\s\f\b\f\o\f\q\1\0\l\z\q\3\5\5\6\1\o\c\x\5\8\2\t\t\2\g\h\t\e\z\x\k\d\m\o\x\4\1\r\o\r\8\r\a\h\d\6\0\r\2\n\e\g\m\e\o\z\e\r\i\c\c\y\q\b\4\w\o\z\7\9\t\a\m\r\9\h\4\8\d\j\j\f\5\u\e\4\k\d\e\z\v\y\a\n\r\s\i\w\o\t\h\f\9\w\r\w\4\v\a\s\m\d\0\1\6\8\x\x\z\y\s\k\x\1\o\b\c\8\2\k\s\l\s\c\2\4\7\h\0\1\z\2\5\1\7\5\j\x\k\g\d\z\q\9\o\z\h\d\x\i\j\4\j\8\n\d\6\a\l\k\g\s\h\f\8\c\7\p\z\w\8\j\0\6\2\1\6\9\g\r\j\7\5\s\d\q\m\n\c\c\a\l\z\x\3\d\b\1\k\i\w\0\1\r\q\n\b\h\0\5\6\y\t\n\f\r\o\s\5\v\1\b\w\j\s\n\b\q\a\k\q\e\z\a\a\r\f\a\x\2\6\3\b\g\v\6\d\0\d\l\8\5\y\h\k\2\q\g\u\g\x\q\x\a\k\8\t\m\t\o\j\b\k\5\h\9\k\n\m\h\k\6\c\c\r\9\d\c\4\3\q\p\p\n\n\3\3\2\c\w\a\r\z\s\0\i\m\b\7\l\r\7\o\1\y\f\a\l\6\v\1\y\7\l\p\f\3\c\l\h\7\d\f\j\5\5\r\z\c\o\6\o\j\l\4\h\g\f\9\1\4\7\m\7\8\g\x\9\d\y\l\h\p\r\s\a\r\c\1\8\u\2\1\o\r\o\l\4\k\j\o\3\4\c\y\e\8\e\6\w\7\p\g\7\y\n\v\1\8\v\8\h\v\v\d\k\b\c\h\6\k\c\9\6\l\7\2\1\3\p\d\z\v\l\9\7\q\x\4\c\u\l\7\x\5\2\5\x\z\v\v\7\t\q\g\u\b\a\k\k\z\i\8\8\u\p\f\2\f\n\1\v\q\9\y\j\d\v\g\s\o\7\q\3\v\d\r\8\q\g\g\9\4\x\i\9\v\8\z\y\6\c\s\5\t\b\s\x\f\z\8\o\0\s\a\8\5\m\5\a\r\0\o\b\e\z\l\7\n\5\4\3\l\p\j\2\w\j\f\5\9\7\e\n\m\7\0\6\d\r\t\z\i\q\x\w\u\f\w\0\y\6\d\f\w\u\c\8\e\m\9\v\d\9\3\h\v\1\d\m\r\v\z\2\4\o\w\k\m\u\6\x\5\x\1\b\3\p\h\d\m\a\f\b\z\n\q\7\q\3\4\d\z\w\o\j\m\o\r\s\t\d\l\r\f\z\e\p\g\f\q\p\j\d\3\r\q\1\k\5\z\o\u\r\r\k\a\l\2\7\j\v\v\c\t\y\5\a\g\o\i\d\h\d\m\b\k\t\g\n\7\s\t\9\e\p\8\5\y\e\0\g\l\9\o\x\6\9\c\z\9\9\2\y\q\g\a\e\w\v\4\d\b\6\m\0\a\q\8\p\h\z\q\f\y\b\d\a\t\b\u\r\0\v\u\h\6\9\p\4\5\x\f\t\o\y\p\w\s\v\2\i\3\g\q\3\z\i\f\y\6\2\s\h\e\5\n\x\n\3\l\e\5\s\8\8\6\u\k\6\p\b\f\i\9\a\w\d\h\7\e\o\t\4\g\p\o\4\x\d\x\1\p\p\6\o\m\q\l\8\1\m\l\t\7\m\w\b\p\f\5\k\x\4\n\x\7\o\g\p\1\s\s\d\x\q\4\2\n\w\n\m\e\r\h\8\i\c\m\u\y\m\a\f\d\x\g\v\u\3\t\l\1\l\s\x\4\7\f\v\1\b\4\u\6\d\4\3\w\h\b\1\i\d\z\r\m\x\c\z\v\n\z ]] 00:06:42.601 16:52:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:42.860 16:52:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:42.860 16:52:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:42.860 16:52:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:42.860 16:52:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.860 [2024-11-20 16:52:01.359071] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:42.860 [2024-11-20 16:52:01.359230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61271 ] 00:06:42.860 { 00:06:42.860 "subsystems": [ 00:06:42.860 { 00:06:42.860 "subsystem": "bdev", 00:06:42.860 "config": [ 00:06:42.860 { 00:06:42.860 "params": { 00:06:42.860 "block_size": 512, 00:06:42.860 "num_blocks": 1048576, 00:06:42.860 "name": "malloc0" 00:06:42.860 }, 00:06:42.860 "method": "bdev_malloc_create" 00:06:42.860 }, 00:06:42.860 { 00:06:42.860 "params": { 00:06:42.860 "filename": "/dev/zram1", 00:06:42.860 "name": "uring0" 00:06:42.860 }, 00:06:42.860 "method": "bdev_uring_create" 00:06:42.860 }, 00:06:42.860 { 00:06:42.860 "method": "bdev_wait_for_examine" 00:06:42.860 } 00:06:42.860 ] 00:06:42.860 } 00:06:42.860 ] 00:06:42.860 } 00:06:43.119 [2024-11-20 16:52:01.506542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.119 [2024-11-20 16:52:01.568332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.119 [2024-11-20 16:52:01.624432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.497  [2024-11-20T16:52:03.970Z] Copying: 154/512 [MB] (154 MBps) [2024-11-20T16:52:04.908Z] Copying: 308/512 [MB] (154 MBps) [2024-11-20T16:52:05.167Z] Copying: 466/512 [MB] (157 MBps) [2024-11-20T16:52:05.736Z] Copying: 512/512 [MB] (average 155 MBps) 00:06:47.200 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.200 16:52:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:47.200 [2024-11-20 16:52:05.568551] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:47.201 [2024-11-20 16:52:05.569202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61329 ] 00:06:47.201 { 00:06:47.201 "subsystems": [ 00:06:47.201 { 00:06:47.201 "subsystem": "bdev", 00:06:47.201 "config": [ 00:06:47.201 { 00:06:47.201 "params": { 00:06:47.201 "block_size": 512, 00:06:47.201 "num_blocks": 1048576, 00:06:47.201 "name": "malloc0" 00:06:47.201 }, 00:06:47.201 "method": "bdev_malloc_create" 00:06:47.201 }, 00:06:47.201 { 00:06:47.201 "params": { 00:06:47.201 "filename": "/dev/zram1", 00:06:47.201 "name": "uring0" 00:06:47.201 }, 00:06:47.201 "method": "bdev_uring_create" 00:06:47.201 }, 00:06:47.201 { 00:06:47.201 "params": { 00:06:47.201 "name": "uring0" 00:06:47.201 }, 00:06:47.201 "method": "bdev_uring_delete" 00:06:47.201 }, 00:06:47.201 { 00:06:47.201 "method": "bdev_wait_for_examine" 00:06:47.201 } 00:06:47.201 ] 00:06:47.201 } 00:06:47.201 ] 00:06:47.201 } 00:06:47.201 [2024-11-20 16:52:05.719033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.460 [2024-11-20 16:52:05.782513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.460 [2024-11-20 16:52:05.840380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.719  [2024-11-20T16:52:06.514Z] Copying: 0/0 [B] (average 0 Bps) 00:06:47.978 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.978 16:52:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.978 16:52:06 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:48.238 [2024-11-20 16:52:06.522717] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:48.238 [2024-11-20 16:52:06.522821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:06:48.238 { 00:06:48.238 "subsystems": [ 00:06:48.238 { 00:06:48.238 "subsystem": "bdev", 00:06:48.238 "config": [ 00:06:48.238 { 00:06:48.238 "params": { 00:06:48.238 "block_size": 512, 00:06:48.238 "num_blocks": 1048576, 00:06:48.238 "name": "malloc0" 00:06:48.238 }, 00:06:48.238 "method": "bdev_malloc_create" 00:06:48.238 }, 00:06:48.238 { 00:06:48.238 "params": { 00:06:48.238 "filename": "/dev/zram1", 00:06:48.238 "name": "uring0" 00:06:48.238 }, 00:06:48.238 "method": "bdev_uring_create" 00:06:48.238 }, 00:06:48.238 { 00:06:48.238 "params": { 00:06:48.238 "name": "uring0" 00:06:48.238 }, 00:06:48.238 "method": "bdev_uring_delete" 00:06:48.238 }, 00:06:48.238 { 00:06:48.238 "method": "bdev_wait_for_examine" 00:06:48.238 } 00:06:48.238 ] 00:06:48.238 } 00:06:48.238 ] 00:06:48.238 } 00:06:48.238 [2024-11-20 16:52:06.674057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.238 [2024-11-20 16:52:06.743351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.497 [2024-11-20 16:52:06.803830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.497 [2024-11-20 16:52:07.026595] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:48.497 [2024-11-20 16:52:07.026688] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:48.497 [2024-11-20 16:52:07.026700] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:48.497 [2024-11-20 16:52:07.026710] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.065 [2024-11-20 16:52:07.359533] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:49.065 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:49.066 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:49.324 00:06:49.324 real 0m15.348s 00:06:49.324 user 0m10.501s 00:06:49.324 sys 0m12.762s 00:06:49.324 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.325 16:52:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.325 ************************************ 00:06:49.325 END TEST dd_uring_copy 00:06:49.325 ************************************ 00:06:49.325 00:06:49.325 real 0m15.580s 00:06:49.325 user 0m10.635s 00:06:49.325 sys 0m12.866s 00:06:49.325 16:52:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.325 16:52:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:49.325 ************************************ 00:06:49.325 END TEST spdk_dd_uring 00:06:49.325 ************************************ 00:06:49.325 16:52:07 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:49.325 16:52:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.325 16:52:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.325 16:52:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.325 ************************************ 00:06:49.325 START TEST spdk_dd_sparse 00:06:49.325 ************************************ 00:06:49.325 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:49.325 * Looking for test storage... 00:06:49.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.325 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.325 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.325 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.584 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.584 --rc genhtml_branch_coverage=1 00:06:49.585 --rc genhtml_function_coverage=1 00:06:49.585 --rc genhtml_legend=1 00:06:49.585 --rc geninfo_all_blocks=1 00:06:49.585 --rc geninfo_unexecuted_blocks=1 00:06:49.585 00:06:49.585 ' 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.585 --rc genhtml_branch_coverage=1 00:06:49.585 --rc genhtml_function_coverage=1 00:06:49.585 --rc genhtml_legend=1 00:06:49.585 --rc geninfo_all_blocks=1 00:06:49.585 --rc geninfo_unexecuted_blocks=1 00:06:49.585 00:06:49.585 ' 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.585 --rc genhtml_branch_coverage=1 00:06:49.585 --rc genhtml_function_coverage=1 00:06:49.585 --rc genhtml_legend=1 00:06:49.585 --rc geninfo_all_blocks=1 00:06:49.585 --rc geninfo_unexecuted_blocks=1 00:06:49.585 00:06:49.585 ' 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.585 --rc genhtml_branch_coverage=1 00:06:49.585 --rc genhtml_function_coverage=1 00:06:49.585 --rc genhtml_legend=1 00:06:49.585 --rc geninfo_all_blocks=1 00:06:49.585 --rc geninfo_unexecuted_blocks=1 00:06:49.585 00:06:49.585 ' 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.585 16:52:07 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:49.585 1+0 records in 00:06:49.585 1+0 records out 00:06:49.585 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00786407 s, 533 MB/s 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:49.585 1+0 records in 00:06:49.585 1+0 records out 00:06:49.585 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00728856 s, 575 MB/s 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:49.585 1+0 records in 00:06:49.585 1+0 records out 00:06:49.585 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00470516 s, 891 MB/s 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.585 16:52:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:49.585 ************************************ 00:06:49.585 START TEST dd_sparse_file_to_file 00:06:49.585 ************************************ 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:49.585 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:49.585 [2024-11-20 16:52:08.060269] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
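
The prepare step traced above builds the sparse input shared by all three cases: a 100 MiB file-backed AIO disk and file_zero1, which receives three 4 MiB runs of data at offsets 0, 16 MiB and 32 MiB (seek=0/4/8 at bs=4M), giving it a 36 MiB apparent size with only 12 MiB allocated. The file_to_file case then drives spdk_dd with a generated bdev config. A minimal stand-alone sketch of that sequence, assuming the build's spdk_dd binary is on PATH and using dd_conf.json as a stand-in for the config the test pipes through /dev/fd/62:

    truncate --size 104857600 dd_sparse_aio_disk             # 100 MiB backing file for the AIO bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1              # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # 4 MiB at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # 4 MiB at 32 MiB -> 36 MiB apparent, 12 MiB stored
    cat > dd_conf.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
     {"method":"bdev_aio_create","params":{"block_size":4096,"filename":"dd_sparse_aio_disk","name":"dd_aio"}},
     {"method":"bdev_lvol_create_lvstore","params":{"lvs_name":"dd_lvstore","bdev_name":"dd_aio"}},
     {"method":"bdev_wait_for_examine"}]}]}
    EOF
    spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_conf.json
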
00:06:49.585 [2024-11-20 16:52:08.060386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:06:49.585 { 00:06:49.585 "subsystems": [ 00:06:49.585 { 00:06:49.585 "subsystem": "bdev", 00:06:49.585 "config": [ 00:06:49.585 { 00:06:49.585 "params": { 00:06:49.585 "block_size": 4096, 00:06:49.585 "filename": "dd_sparse_aio_disk", 00:06:49.585 "name": "dd_aio" 00:06:49.585 }, 00:06:49.585 "method": "bdev_aio_create" 00:06:49.585 }, 00:06:49.585 { 00:06:49.585 "params": { 00:06:49.585 "lvs_name": "dd_lvstore", 00:06:49.585 "bdev_name": "dd_aio" 00:06:49.585 }, 00:06:49.585 "method": "bdev_lvol_create_lvstore" 00:06:49.585 }, 00:06:49.586 { 00:06:49.586 "method": "bdev_wait_for_examine" 00:06:49.586 } 00:06:49.586 ] 00:06:49.586 } 00:06:49.586 ] 00:06:49.586 } 00:06:49.845 [2024-11-20 16:52:08.205739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.845 [2024-11-20 16:52:08.257279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.845 [2024-11-20 16:52:08.311209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.104  [2024-11-20T16:52:08.640Z] Copying: 12/36 [MB] (average 750 MBps) 00:06:50.104 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:50.363 00:06:50.363 real 0m0.660s 00:06:50.363 user 0m0.423s 00:06:50.363 sys 0m0.352s 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:50.363 ************************************ 00:06:50.363 END TEST dd_sparse_file_to_file 00:06:50.363 ************************************ 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.363 16:52:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:50.363 ************************************ 00:06:50.364 START TEST dd_sparse_file_to_bdev 
00:06:50.364 ************************************ 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:50.364 16:52:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.364 [2024-11-20 16:52:08.766698] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:50.364 [2024-11-20 16:52:08.766810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61504 ] 00:06:50.364 { 00:06:50.364 "subsystems": [ 00:06:50.364 { 00:06:50.364 "subsystem": "bdev", 00:06:50.364 "config": [ 00:06:50.364 { 00:06:50.364 "params": { 00:06:50.364 "block_size": 4096, 00:06:50.364 "filename": "dd_sparse_aio_disk", 00:06:50.364 "name": "dd_aio" 00:06:50.364 }, 00:06:50.364 "method": "bdev_aio_create" 00:06:50.364 }, 00:06:50.364 { 00:06:50.364 "params": { 00:06:50.364 "lvs_name": "dd_lvstore", 00:06:50.364 "lvol_name": "dd_lvol", 00:06:50.364 "size_in_mib": 36, 00:06:50.364 "thin_provision": true 00:06:50.364 }, 00:06:50.364 "method": "bdev_lvol_create" 00:06:50.364 }, 00:06:50.364 { 00:06:50.364 "method": "bdev_wait_for_examine" 00:06:50.364 } 00:06:50.364 ] 00:06:50.364 } 00:06:50.364 ] 00:06:50.364 } 00:06:50.623 [2024-11-20 16:52:08.907692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.623 [2024-11-20 16:52:08.967148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.623 [2024-11-20 16:52:09.023001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.623  [2024-11-20T16:52:09.418Z] Copying: 12/36 [MB] (average 480 MBps) 00:06:50.882 00:06:50.882 00:06:50.882 real 0m0.634s 00:06:50.882 user 0m0.400s 00:06:50.882 sys 0m0.353s 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.882 ************************************ 00:06:50.882 END TEST dd_sparse_file_to_bdev 00:06:50.882 ************************************ 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:50.882 ************************************ 00:06:50.882 START TEST dd_sparse_bdev_to_file 00:06:50.882 ************************************ 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:50.882 16:52:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:51.141 [2024-11-20 16:52:09.457010] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:51.141 [2024-11-20 16:52:09.457156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61536 ] 00:06:51.141 { 00:06:51.141 "subsystems": [ 00:06:51.141 { 00:06:51.141 "subsystem": "bdev", 00:06:51.141 "config": [ 00:06:51.141 { 00:06:51.141 "params": { 00:06:51.141 "block_size": 4096, 00:06:51.141 "filename": "dd_sparse_aio_disk", 00:06:51.141 "name": "dd_aio" 00:06:51.141 }, 00:06:51.141 "method": "bdev_aio_create" 00:06:51.141 }, 00:06:51.141 { 00:06:51.141 "method": "bdev_wait_for_examine" 00:06:51.141 } 00:06:51.141 ] 00:06:51.141 } 00:06:51.141 ] 00:06:51.141 } 00:06:51.141 [2024-11-20 16:52:09.604979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.141 [2024-11-20 16:52:09.663627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.399 [2024-11-20 16:52:09.718761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.399  [2024-11-20T16:52:10.232Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:51.696 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:51.696 00:06:51.696 real 0m0.657s 00:06:51.696 user 0m0.410s 00:06:51.696 sys 0m0.361s 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:51.696 ************************************ 00:06:51.696 END TEST dd_sparse_bdev_to_file 00:06:51.696 ************************************ 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:51.696 00:06:51.696 real 0m2.367s 00:06:51.696 user 0m1.404s 00:06:51.696 sys 0m1.307s 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.696 16:52:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:51.696 ************************************ 00:06:51.696 END TEST spdk_dd_sparse 00:06:51.696 ************************************ 00:06:51.696 16:52:10 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:51.696 16:52:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.696 16:52:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.696 16:52:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.696 ************************************ 00:06:51.696 START TEST spdk_dd_negative 00:06:51.696 ************************************ 00:06:51.696 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:51.956 * Looking for test storage... 
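
All three sparse cases above finish with the same pair of checks: the apparent size from stat --printf=%s must match between source and destination, and so must the allocated block count from stat --printf=%b, which is what proves the holes survived the copy. In this run the files report 37748736 bytes apparent (36 MiB) and 24576 blocks allocated (24576 x 512 = 12582912 bytes, exactly the three 4 MiB chunks written during prepare). A small sketch of that check, runnable right after a copy and before the cleanup above removes the files:

    stat1_s=$(stat --printf=%s file_zero1); stat2_s=$(stat --printf=%s file_zero2)   # apparent sizes
    stat1_b=$(stat --printf=%b file_zero1); stat2_b=$(stat --printf=%b file_zero2)   # allocated 512-byte blocks
    [[ "$stat1_s" == "$stat2_s" && "$stat1_b" == "$stat2_b" ]] && echo "sparseness preserved"
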
00:06:51.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.956 --rc genhtml_branch_coverage=1 00:06:51.956 --rc genhtml_function_coverage=1 00:06:51.956 --rc genhtml_legend=1 00:06:51.956 --rc geninfo_all_blocks=1 00:06:51.956 --rc geninfo_unexecuted_blocks=1 00:06:51.956 00:06:51.956 ' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.956 --rc genhtml_branch_coverage=1 00:06:51.956 --rc genhtml_function_coverage=1 00:06:51.956 --rc genhtml_legend=1 00:06:51.956 --rc geninfo_all_blocks=1 00:06:51.956 --rc geninfo_unexecuted_blocks=1 00:06:51.956 00:06:51.956 ' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.956 --rc genhtml_branch_coverage=1 00:06:51.956 --rc genhtml_function_coverage=1 00:06:51.956 --rc genhtml_legend=1 00:06:51.956 --rc geninfo_all_blocks=1 00:06:51.956 --rc geninfo_unexecuted_blocks=1 00:06:51.956 00:06:51.956 ' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.956 --rc genhtml_branch_coverage=1 00:06:51.956 --rc genhtml_function_coverage=1 00:06:51.956 --rc genhtml_legend=1 00:06:51.956 --rc geninfo_all_blocks=1 00:06:51.956 --rc geninfo_unexecuted_blocks=1 00:06:51.956 00:06:51.956 ' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.956 ************************************ 00:06:51.956 START TEST 
dd_invalid_arguments 00:06:51.956 ************************************ 00:06:51.956 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:51.957 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:51.957 00:06:51.957 CPU options: 00:06:51.957 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:51.957 (like [0,1,10]) 00:06:51.957 --lcores lcore to CPU mapping list. The list is in the format: 00:06:51.957 [<,lcores[@CPUs]>...] 00:06:51.957 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:51.957 Within the group, '-' is used for range separator, 00:06:51.957 ',' is used for single number separator. 00:06:51.957 '( )' can be omitted for single element group, 00:06:51.957 '@' can be omitted if cpus and lcores have the same value 00:06:51.957 --disable-cpumask-locks Disable CPU core lock files. 00:06:51.957 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:51.957 pollers in the app support interrupt mode) 00:06:51.957 -p, --main-core main (primary) core for DPDK 00:06:51.957 00:06:51.957 Configuration options: 00:06:51.957 -c, --config, --json JSON config file 00:06:51.957 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:51.957 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:51.957 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:51.957 --rpcs-allowed comma-separated list of permitted RPCS 00:06:51.957 --json-ignore-init-errors don't exit on invalid config entry 00:06:51.957 00:06:51.957 Memory options: 00:06:51.957 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:51.957 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:51.957 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:51.957 -R, --huge-unlink unlink huge files after initialization 00:06:51.957 -n, --mem-channels number of memory channels used for DPDK 00:06:51.957 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:51.957 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:51.957 --no-huge run without using hugepages 00:06:51.957 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:51.957 -i, --shm-id shared memory ID (optional) 00:06:51.957 -g, --single-file-segments force creating just one hugetlbfs file 00:06:51.957 00:06:51.957 PCI options: 00:06:51.957 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:51.957 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:51.957 -u, --no-pci disable PCI access 00:06:51.957 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:51.957 00:06:51.957 Log options: 00:06:51.957 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:51.957 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:51.957 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:51.957 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:51.957 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:51.957 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:51.957 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:51.957 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:51.957 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:51.957 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:51.957 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:51.957 --silence-noticelog disable notice level logging to stderr 00:06:51.957 00:06:51.957 Trace options: 00:06:51.957 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:51.957 setting 0 to disable trace (default 32768) 00:06:51.957 Tracepoints vary in size and can use more than one trace entry. 00:06:51.957 -e, --tpoint-group [:] 00:06:51.957 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:51.957 [2024-11-20 16:52:10.442196] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:51.957 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:51.957 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:51.957 bdev_raid, scheduler, all). 00:06:51.957 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:51.957 a tracepoint group. First tpoint inside a group can be enabled by 00:06:51.957 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:51.957 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:51.957 in /include/spdk_internal/trace_defs.h 00:06:51.957 00:06:51.957 Other options: 00:06:51.957 -h, --help show this usage 00:06:51.957 -v, --version print SPDK version 00:06:51.957 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:51.957 --env-context Opaque context for use of the env implementation 00:06:51.957 00:06:51.957 Application specific: 00:06:51.957 [--------- DD Options ---------] 00:06:51.957 --if Input file. Must specify either --if or --ib. 00:06:51.957 --ib Input bdev. Must specifier either --if or --ib 00:06:51.957 --of Output file. Must specify either --of or --ob. 00:06:51.957 --ob Output bdev. Must specify either --of or --ob. 00:06:51.957 --iflag Input file flags. 00:06:51.957 --oflag Output file flags. 00:06:51.957 --bs I/O unit size (default: 4096) 00:06:51.957 --qd Queue depth (default: 2) 00:06:51.957 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:51.957 --skip Skip this many I/O units at start of input. (default: 0) 00:06:51.957 --seek Skip this many I/O units at start of output. (default: 0) 00:06:51.957 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:51.957 --sparse Enable hole skipping in input target 00:06:51.957 Available iflag and oflag values: 00:06:51.957 append - append mode 00:06:51.957 direct - use direct I/O for data 00:06:51.957 directory - fail unless a directory 00:06:51.957 dsync - use synchronized I/O for data 00:06:51.957 noatime - do not update access time 00:06:51.957 noctty - do not assign controlling terminal from file 00:06:51.957 nofollow - do not follow symlinks 00:06:51.957 nonblock - use non-blocking I/O 00:06:51.957 sync - use synchronized I/O for data and metadata 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.957 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.958 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.958 00:06:51.958 real 0m0.077s 00:06:51.958 user 0m0.049s 00:06:51.958 sys 0m0.027s 00:06:51.958 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.958 16:52:10 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:51.958 ************************************ 00:06:51.958 END TEST dd_invalid_arguments 00:06:51.958 ************************************ 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.217 ************************************ 00:06:52.217 START TEST dd_double_input 00:06:52.217 ************************************ 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:52.217 [2024-11-20 16:52:10.571332] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
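
Every case in spdk_dd_negative follows the pattern just traced for dd_double_input: wrap an spdk_dd invocation that must fail in the NOT helper, let spdk_dd print its argument error (here "You may specify either --if or --ib, but not both."), and pass only when the exit status is non-zero. A rough stand-in for that pattern, with a simplified NOT rather than the real autotest_common.sh helper and the dump-file path shortened:

    NOT() { ! "$@"; }                                        # simplified: succeed only when the command fails
    NOT spdk_dd --if=dd.dump0 --ib= --ob= \
        && echo "spdk_dd rejected --if combined with --ib, as expected"
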
00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.217 00:06:52.217 real 0m0.077s 00:06:52.217 user 0m0.055s 00:06:52.217 sys 0m0.022s 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:52.217 ************************************ 00:06:52.217 END TEST dd_double_input 00:06:52.217 ************************************ 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.217 ************************************ 00:06:52.217 START TEST dd_double_output 00:06:52.217 ************************************ 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:52.217 [2024-11-20 16:52:10.703703] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.217 00:06:52.217 real 0m0.078s 00:06:52.217 user 0m0.048s 00:06:52.217 sys 0m0.029s 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.217 16:52:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:52.218 ************************************ 00:06:52.218 END TEST dd_double_output 00:06:52.218 ************************************ 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.477 ************************************ 00:06:52.477 START TEST dd_no_input 00:06:52.477 ************************************ 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:52.477 [2024-11-20 16:52:10.835708] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.477 00:06:52.477 real 0m0.078s 00:06:52.477 user 0m0.044s 00:06:52.477 sys 0m0.033s 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.477 16:52:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:52.477 ************************************ 00:06:52.477 END TEST dd_no_input 00:06:52.478 ************************************ 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.478 ************************************ 00:06:52.478 START TEST dd_no_output 00:06:52.478 ************************************ 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.478 [2024-11-20 16:52:10.966642] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:52.478 16:52:10 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.478 00:06:52.478 real 0m0.077s 00:06:52.478 user 0m0.050s 00:06:52.478 sys 0m0.026s 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.478 16:52:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:52.478 ************************************ 00:06:52.478 END TEST dd_no_output 00:06:52.478 ************************************ 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 ************************************ 00:06:52.738 START TEST dd_wrong_blocksize 00:06:52.738 ************************************ 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:52.738 [2024-11-20 16:52:11.095923] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.738 00:06:52.738 real 0m0.077s 00:06:52.738 user 0m0.051s 00:06:52.738 sys 0m0.025s 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 ************************************ 00:06:52.738 END TEST dd_wrong_blocksize 00:06:52.738 ************************************ 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 ************************************ 00:06:52.738 START TEST dd_smaller_blocksize 00:06:52.738 ************************************ 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.738 
16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.738 16:52:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:52.738 [2024-11-20 16:52:11.234143] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:06:52.738 [2024-11-20 16:52:11.234278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61763 ] 00:06:52.997 [2024-11-20 16:52:11.387426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.997 [2024-11-20 16:52:11.450193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.997 [2024-11-20 16:52:11.507582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.564 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:53.564 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:53.823 [2024-11-20 16:52:12.117372] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:53.823 [2024-11-20 16:52:12.117476] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.823 [2024-11-20 16:52:12.238766] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.823 00:06:53.823 real 0m1.129s 00:06:53.823 user 0m0.412s 00:06:53.823 sys 0m0.609s 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.823 ************************************ 00:06:53.823 END TEST dd_smaller_blocksize 00:06:53.823 ************************************ 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:53.823 ************************************ 00:06:53.823 START TEST dd_invalid_count 00:06:53.823 ************************************ 00:06:53.823 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
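
dd_smaller_blocksize, which finished just above, is the one negative case that gets past argument parsing: with --bs=99999999999999 the EAL cannot find a suitable memseg list for the transfer buffer and spdk_dd aborts with "Cannot allocate memory - try smaller block size value", exiting with status 244; the helper then subtracts 128 and accepts the result as the expected failure (es=1). Reduced to the essentials seen in the log, with paths shortened and assuming a hugepage-enabled SPDK environment:

    touch dd.dump0 dd.dump1                                  # the dump files negative_dd.sh created earlier
    spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=99999999999999  # expected to fail: I/O unit cannot be allocated
    echo $?                                                  # non-zero, which is what the wrapped test requires
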
00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:54.082 [2024-11-20 16:52:12.408300] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.082 00:06:54.082 real 0m0.068s 00:06:54.082 user 0m0.044s 00:06:54.082 sys 0m0.020s 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:54.082 ************************************ 00:06:54.082 END TEST dd_invalid_count 00:06:54.082 ************************************ 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.082 ************************************ 
00:06:54.082 START TEST dd_invalid_oflag 00:06:54.082 ************************************ 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.082 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:54.083 [2024-11-20 16:52:12.537311] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.083 00:06:54.083 real 0m0.079s 00:06:54.083 user 0m0.052s 00:06:54.083 sys 0m0.024s 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:54.083 ************************************ 00:06:54.083 END TEST dd_invalid_oflag 00:06:54.083 ************************************ 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.083 ************************************ 00:06:54.083 START TEST dd_invalid_iflag 00:06:54.083 
************************************ 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:54.083 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:54.342 [2024-11-20 16:52:12.675710] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.342 00:06:54.342 real 0m0.081s 00:06:54.342 user 0m0.049s 00:06:54.342 sys 0m0.029s 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:54.342 ************************************ 00:06:54.342 END TEST dd_invalid_iflag 00:06:54.342 ************************************ 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.342 ************************************ 00:06:54.342 START TEST dd_unknown_flag 00:06:54.342 ************************************ 00:06:54.342 
16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.342 16:52:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:54.342 [2024-11-20 16:52:12.815344] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:54.342 [2024-11-20 16:52:12.815465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:06:54.601 [2024-11-20 16:52:12.966993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.601 [2024-11-20 16:52:13.024260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.601 [2024-11-20 16:52:13.075687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.601 [2024-11-20 16:52:13.111170] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:54.601 [2024-11-20 16:52:13.111269] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.601 [2024-11-20 16:52:13.111325] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:54.601 [2024-11-20 16:52:13.111338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.601 [2024-11-20 16:52:13.111577] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:54.601 [2024-11-20 16:52:13.111593] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.601 [2024-11-20 16:52:13.111648] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:54.601 [2024-11-20 16:52:13.111658] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:54.860 [2024-11-20 16:52:13.228055] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.860 00:06:54.860 real 0m0.542s 00:06:54.860 user 0m0.297s 00:06:54.860 sys 0m0.144s 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:54.860 ************************************ 00:06:54.860 END TEST dd_unknown_flag 00:06:54.860 ************************************ 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:54.860 ************************************ 00:06:54.860 START TEST dd_invalid_json 00:06:54.860 ************************************ 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.860 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:55.119 [2024-11-20 16:52:13.411371] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:55.119 [2024-11-20 16:52:13.411488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61889 ] 00:06:55.119 [2024-11-20 16:52:13.563367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.119 [2024-11-20 16:52:13.622249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.119 [2024-11-20 16:52:13.622349] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:55.119 [2024-11-20 16:52:13.622366] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:55.119 [2024-11-20 16:52:13.622376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.119 [2024-11-20 16:52:13.622412] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.385 00:06:55.385 real 0m0.340s 00:06:55.385 user 0m0.175s 00:06:55.385 sys 0m0.061s 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 ************************************ 00:06:55.385 END TEST dd_invalid_json 00:06:55.385 ************************************ 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 ************************************ 00:06:55.385 START TEST dd_invalid_seek 00:06:55.385 ************************************ 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:55.385 
16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.385 16:52:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:55.385 { 00:06:55.385 "subsystems": [ 00:06:55.385 { 00:06:55.385 "subsystem": "bdev", 00:06:55.385 "config": [ 00:06:55.385 { 00:06:55.385 "params": { 00:06:55.385 "block_size": 512, 00:06:55.385 "num_blocks": 512, 00:06:55.385 "name": "malloc0" 00:06:55.385 }, 00:06:55.385 "method": "bdev_malloc_create" 00:06:55.385 }, 00:06:55.385 { 00:06:55.385 "params": { 00:06:55.385 "block_size": 512, 00:06:55.385 "num_blocks": 512, 00:06:55.385 "name": "malloc1" 00:06:55.385 }, 00:06:55.385 "method": "bdev_malloc_create" 00:06:55.385 }, 00:06:55.385 { 00:06:55.385 "method": "bdev_wait_for_examine" 00:06:55.385 } 00:06:55.385 ] 00:06:55.385 } 00:06:55.385 ] 00:06:55.385 } 00:06:55.385 [2024-11-20 16:52:13.797811] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:55.385 [2024-11-20 16:52:13.797943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:06:55.643 [2024-11-20 16:52:13.945154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.643 [2024-11-20 16:52:14.009118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.643 [2024-11-20 16:52:14.062414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.644 [2024-11-20 16:52:14.126788] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:55.644 [2024-11-20 16:52:14.126865] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.903 [2024-11-20 16:52:14.245275] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.903 00:06:55.903 real 0m0.575s 00:06:55.903 user 0m0.378s 00:06:55.903 sys 0m0.149s 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:55.903 ************************************ 00:06:55.903 END TEST dd_invalid_seek 00:06:55.903 ************************************ 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:55.903 ************************************ 00:06:55.903 START TEST dd_invalid_skip 00:06:55.903 ************************************ 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.903 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:55.903 { 00:06:55.903 "subsystems": [ 00:06:55.903 { 00:06:55.903 "subsystem": "bdev", 00:06:55.903 "config": [ 00:06:55.903 { 00:06:55.903 "params": { 00:06:55.903 "block_size": 512, 00:06:55.903 "num_blocks": 512, 00:06:55.903 "name": "malloc0" 00:06:55.903 }, 00:06:55.903 "method": "bdev_malloc_create" 00:06:55.903 }, 00:06:55.903 { 00:06:55.903 "params": { 00:06:55.903 "block_size": 512, 00:06:55.903 "num_blocks": 512, 00:06:55.903 "name": "malloc1" 00:06:55.903 }, 00:06:55.903 "method": "bdev_malloc_create" 00:06:55.903 }, 00:06:55.903 { 00:06:55.903 "method": "bdev_wait_for_examine" 00:06:55.903 } 00:06:55.903 ] 00:06:55.903 } 00:06:55.903 ] 00:06:55.903 } 00:06:55.903 [2024-11-20 16:52:14.428373] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:55.904 [2024-11-20 16:52:14.428469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:06:56.163 [2024-11-20 16:52:14.574726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.163 [2024-11-20 16:52:14.621308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.163 [2024-11-20 16:52:14.674404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.421 [2024-11-20 16:52:14.738686] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:56.421 [2024-11-20 16:52:14.738758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.421 [2024-11-20 16:52:14.860097] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.421 00:06:56.421 real 0m0.553s 00:06:56.421 user 0m0.352s 00:06:56.421 sys 0m0.159s 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.421 ************************************ 00:06:56.421 END TEST dd_invalid_skip 00:06:56.421 ************************************ 00:06:56.421 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:56.678 ************************************ 00:06:56.678 START TEST dd_invalid_input_count 00:06:56.678 ************************************ 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:56.678 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.679 16:52:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:56.679 [2024-11-20 16:52:15.032912] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:56.679 [2024-11-20 16:52:15.033038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61991 ] 00:06:56.679 { 00:06:56.679 "subsystems": [ 00:06:56.679 { 00:06:56.679 "subsystem": "bdev", 00:06:56.679 "config": [ 00:06:56.679 { 00:06:56.679 "params": { 00:06:56.679 "block_size": 512, 00:06:56.679 "num_blocks": 512, 00:06:56.679 "name": "malloc0" 00:06:56.679 }, 00:06:56.679 "method": "bdev_malloc_create" 00:06:56.679 }, 00:06:56.679 { 00:06:56.679 "params": { 00:06:56.679 "block_size": 512, 00:06:56.679 "num_blocks": 512, 00:06:56.679 "name": "malloc1" 00:06:56.679 }, 00:06:56.679 "method": "bdev_malloc_create" 00:06:56.679 }, 00:06:56.679 { 00:06:56.679 "method": "bdev_wait_for_examine" 00:06:56.679 } 00:06:56.679 ] 00:06:56.679 } 00:06:56.679 ] 00:06:56.679 } 00:06:56.679 [2024-11-20 16:52:15.181716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.937 [2024-11-20 16:52:15.233302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.937 [2024-11-20 16:52:15.286561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.937 [2024-11-20 16:52:15.350551] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:56.937 [2024-11-20 16:52:15.350628] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.937 [2024-11-20 16:52:15.468213] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.196 00:06:57.196 real 0m0.563s 00:06:57.196 user 0m0.365s 00:06:57.196 sys 0m0.160s 00:06:57.196 ************************************ 00:06:57.196 END TEST dd_invalid_input_count 00:06:57.196 ************************************ 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:57.196 ************************************ 00:06:57.196 START TEST dd_invalid_output_count 00:06:57.196 ************************************ 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.196 16:52:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:57.196 [2024-11-20 16:52:15.638839] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:57.196 [2024-11-20 16:52:15.638944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:06:57.196 { 00:06:57.196 "subsystems": [ 00:06:57.196 { 00:06:57.196 "subsystem": "bdev", 00:06:57.196 "config": [ 00:06:57.196 { 00:06:57.196 "params": { 00:06:57.196 "block_size": 512, 00:06:57.196 "num_blocks": 512, 00:06:57.196 "name": "malloc0" 00:06:57.196 }, 00:06:57.196 "method": "bdev_malloc_create" 00:06:57.197 }, 00:06:57.197 { 00:06:57.197 "method": "bdev_wait_for_examine" 00:06:57.197 } 00:06:57.197 ] 00:06:57.197 } 00:06:57.197 ] 00:06:57.197 } 00:06:57.455 [2024-11-20 16:52:15.784970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.455 [2024-11-20 16:52:15.843243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.455 [2024-11-20 16:52:15.895773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.455 [2024-11-20 16:52:15.949556] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:57.455 [2024-11-20 16:52:15.949623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.714 [2024-11-20 16:52:16.068822] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.714 00:06:57.714 real 0m0.548s 00:06:57.714 user 0m0.352s 00:06:57.714 sys 0m0.151s 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.714 ************************************ 00:06:57.714 END TEST dd_invalid_output_count 00:06:57.714 ************************************ 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 ************************************ 00:06:57.714 START TEST dd_bs_not_multiple 00:06:57.714 ************************************ 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:57.714 16:52:16 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.714 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:57.714 [2024-11-20 16:52:16.245407] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:06:57.714 [2024-11-20 16:52:16.245497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:06:57.973 { 00:06:57.973 "subsystems": [ 00:06:57.973 { 00:06:57.973 "subsystem": "bdev", 00:06:57.973 "config": [ 00:06:57.973 { 00:06:57.973 "params": { 00:06:57.973 "block_size": 512, 00:06:57.973 "num_blocks": 512, 00:06:57.973 "name": "malloc0" 00:06:57.973 }, 00:06:57.973 "method": "bdev_malloc_create" 00:06:57.973 }, 00:06:57.973 { 00:06:57.973 "params": { 00:06:57.973 "block_size": 512, 00:06:57.973 "num_blocks": 512, 00:06:57.973 "name": "malloc1" 00:06:57.973 }, 00:06:57.973 "method": "bdev_malloc_create" 00:06:57.973 }, 00:06:57.973 { 00:06:57.973 "method": "bdev_wait_for_examine" 00:06:57.973 } 00:06:57.973 ] 00:06:57.973 } 00:06:57.973 ] 00:06:57.973 } 00:06:57.973 [2024-11-20 16:52:16.394434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.973 [2024-11-20 16:52:16.443130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.973 [2024-11-20 16:52:16.496402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.232 [2024-11-20 16:52:16.559234] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:58.232 [2024-11-20 16:52:16.559326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.232 [2024-11-20 16:52:16.681665] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.232 00:06:58.232 real 0m0.560s 00:06:58.232 user 0m0.358s 00:06:58.232 sys 0m0.165s 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.232 16:52:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:58.232 ************************************ 00:06:58.232 END TEST dd_bs_not_multiple 00:06:58.232 ************************************ 00:06:58.491 00:06:58.491 real 0m6.609s 00:06:58.491 user 0m3.538s 00:06:58.491 sys 0m2.452s 00:06:58.491 16:52:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.491 16:52:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.491 ************************************ 00:06:58.491 END TEST spdk_dd_negative 00:06:58.491 ************************************ 00:06:58.491 00:06:58.491 real 1m19.328s 00:06:58.491 user 0m50.780s 00:06:58.491 sys 0m35.385s 00:06:58.491 16:52:16 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.491 16:52:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.491 
************************************ 00:06:58.491 END TEST spdk_dd 00:06:58.491 ************************************ 00:06:58.491 16:52:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:58.491 16:52:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.491 16:52:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.491 16:52:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:58.491 16:52:16 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:58.491 16:52:16 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:58.491 16:52:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.491 16:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.491 16:52:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.491 ************************************ 00:06:58.491 START TEST nvmf_tcp 00:06:58.491 ************************************ 00:06:58.491 16:52:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:58.491 * Looking for test storage... 00:06:58.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:58.491 16:52:16 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.491 16:52:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.491 16:52:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.751 16:52:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.751 --rc genhtml_branch_coverage=1 00:06:58.751 --rc genhtml_function_coverage=1 00:06:58.751 --rc genhtml_legend=1 00:06:58.751 --rc geninfo_all_blocks=1 00:06:58.751 --rc geninfo_unexecuted_blocks=1 00:06:58.751 00:06:58.751 ' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.751 --rc genhtml_branch_coverage=1 00:06:58.751 --rc genhtml_function_coverage=1 00:06:58.751 --rc genhtml_legend=1 00:06:58.751 --rc geninfo_all_blocks=1 00:06:58.751 --rc geninfo_unexecuted_blocks=1 00:06:58.751 00:06:58.751 ' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.751 --rc genhtml_branch_coverage=1 00:06:58.751 --rc genhtml_function_coverage=1 00:06:58.751 --rc genhtml_legend=1 00:06:58.751 --rc geninfo_all_blocks=1 00:06:58.751 --rc geninfo_unexecuted_blocks=1 00:06:58.751 00:06:58.751 ' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.751 --rc genhtml_branch_coverage=1 00:06:58.751 --rc genhtml_function_coverage=1 00:06:58.751 --rc genhtml_legend=1 00:06:58.751 --rc geninfo_all_blocks=1 00:06:58.751 --rc geninfo_unexecuted_blocks=1 00:06:58.751 00:06:58.751 ' 00:06:58.751 16:52:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:58.751 16:52:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:58.751 16:52:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.751 16:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.751 ************************************ 00:06:58.751 START TEST nvmf_target_core 00:06:58.751 ************************************ 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:58.751 * Looking for test storage... 00:06:58.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.751 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.010 --rc genhtml_branch_coverage=1 00:06:59.010 --rc genhtml_function_coverage=1 00:06:59.010 --rc genhtml_legend=1 00:06:59.010 --rc geninfo_all_blocks=1 00:06:59.010 --rc geninfo_unexecuted_blocks=1 00:06:59.010 00:06:59.010 ' 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.010 --rc genhtml_branch_coverage=1 00:06:59.010 --rc genhtml_function_coverage=1 00:06:59.010 --rc genhtml_legend=1 00:06:59.010 --rc geninfo_all_blocks=1 00:06:59.010 --rc geninfo_unexecuted_blocks=1 00:06:59.010 00:06:59.010 ' 00:06:59.010 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.011 ************************************ 00:06:59.011 START TEST nvmf_host_management 00:06:59.011 ************************************ 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:59.011 * Looking for test storage... 
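The "[: : integer expression expected" message above is bash complaining that line 33 of test/nvmf/common.sh ran a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this environment; the comparison just evaluates false and the run continues. A generic way to silence that kind of warning is to default the value before the arithmetic test (the variable name below is a hypothetical stand-in, not the one used upstream):

# SOME_FLAG stands in for whichever environment flag was unset at common.sh line 33.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi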
00:06:59.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.011 --rc genhtml_branch_coverage=1 00:06:59.011 --rc genhtml_function_coverage=1 00:06:59.011 --rc genhtml_legend=1 00:06:59.011 --rc geninfo_all_blocks=1 00:06:59.011 --rc geninfo_unexecuted_blocks=1 00:06:59.011 00:06:59.011 ' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.011 16:52:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.011 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:59.271 Cannot find device "nvmf_init_br" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:59.271 Cannot find device "nvmf_init_br2" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:59.271 Cannot find device "nvmf_tgt_br" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.271 Cannot find device "nvmf_tgt_br2" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:59.271 Cannot find device "nvmf_init_br" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:59.271 Cannot find device "nvmf_init_br2" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:59.271 Cannot find device "nvmf_tgt_br" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:59.271 Cannot find device "nvmf_tgt_br2" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:59.271 Cannot find device "nvmf_br" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:59.271 Cannot find device "nvmf_init_if" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:59.271 Cannot find device "nvmf_init_if2" 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:59.271 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:59.541 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:59.541 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:59.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:59.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:06:59.542 00:06:59.542 --- 10.0.0.3 ping statistics --- 00:06:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.542 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:59.542 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:59.542 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:06:59.542 00:06:59.542 --- 10.0.0.4 ping statistics --- 00:06:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.542 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:59.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:59.542 00:06:59.542 --- 10.0.0.1 ping statistics --- 00:06:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.542 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:59.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:59.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:06:59.542 00:06:59.542 --- 10.0.0.2 ping statistics --- 00:06:59.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.542 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:59.542 16:52:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62407 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62407 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62407 ']' 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.542 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.801 [2024-11-20 16:52:18.078984] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
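The nvmf_veth_init sequence traced above builds the whole test network from scratch: two initiator veth interfaces (10.0.0.1 and 10.0.0.2) stay in the default namespace, two target interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, iptables accepts NVMe/TCP traffic on port 4420, and the pings confirm all four addresses answer before the target is started. Condensed to one initiator/target pair, with commands taken from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator -> target address must answer before the test proceeds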
00:06:59.801 [2024-11-20 16:52:18.079083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.801 [2024-11-20 16:52:18.232280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.801 [2024-11-20 16:52:18.293911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.801 [2024-11-20 16:52:18.293987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.801 [2024-11-20 16:52:18.294001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.801 [2024-11-20 16:52:18.294012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.801 [2024-11-20 16:52:18.294021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:59.801 [2024-11-20 16:52:18.295191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.801 [2024-11-20 16:52:18.295348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.801 [2024-11-20 16:52:18.295736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.801 [2024-11-20 16:52:18.295774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.061 [2024-11-20 16:52:18.351978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.061 [2024-11-20 16:52:18.468100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
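The -m 0x1E mask passed to nvmf_tgt above is binary 11110, i.e. cores 1 through 4, which matches the four "Reactor started" notices; core 0 is left free for the bdevperf initiator started next (its EAL line below uses -c 0x1). A quick check:

printf 'obase=2; ibase=16; 1E\n' | bc   # prints 11110 -> cores 1,2,3,4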
00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.061 Malloc0 00:07:00.061 [2024-11-20 16:52:18.547210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.061 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62448 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62448 /var/tmp/bdevperf.sock 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62448 ']' 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:00.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
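The heredoc that host_management.sh cats into rpc_cmd above is what creates the Malloc0 bdev and puts the TCP listener on 10.0.0.3:4420 right after the transport is created. An equivalent sequence issued directly through scripts/rpc.py, with the subsystem and host NQNs taken from the bdevperf config generated below, would be roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0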
00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:00.320 { 00:07:00.320 "params": { 00:07:00.320 "name": "Nvme$subsystem", 00:07:00.320 "trtype": "$TEST_TRANSPORT", 00:07:00.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.320 "adrfam": "ipv4", 00:07:00.320 "trsvcid": "$NVMF_PORT", 00:07:00.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.320 "hdgst": ${hdgst:-false}, 00:07:00.320 "ddgst": ${ddgst:-false} 00:07:00.320 }, 00:07:00.320 "method": "bdev_nvme_attach_controller" 00:07:00.320 } 00:07:00.320 EOF 00:07:00.320 )") 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:00.320 16:52:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:00.320 "params": { 00:07:00.320 "name": "Nvme0", 00:07:00.320 "trtype": "tcp", 00:07:00.320 "traddr": "10.0.0.3", 00:07:00.320 "adrfam": "ipv4", 00:07:00.320 "trsvcid": "4420", 00:07:00.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.320 "hdgst": false, 00:07:00.320 "ddgst": false 00:07:00.320 }, 00:07:00.320 "method": "bdev_nvme_attach_controller" 00:07:00.320 }' 00:07:00.320 [2024-11-20 16:52:18.664516] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:07:00.320 [2024-11-20 16:52:18.664606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62448 ] 00:07:00.320 [2024-11-20 16:52:18.816650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.580 [2024-11-20 16:52:18.874475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.580 [2024-11-20 16:52:18.940296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.580 Running I/O for 10 seconds... 
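On the initiator side, bdevperf receives the generated attach-controller JSON above through a process-substitution fd (/dev/fd/63), so the Nvme0 bdev is connected over TCP to 10.0.0.3:4420 before the verify workload (queue depth 64, 64 KiB I/Os, 10 seconds) starts. Reproduced with a regular file instead of the fd, and assuming the standard "subsystems" wrapper that gen_nvmf_target_json emits around the entry shown above:

cat > /tmp/nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" } ] } ] }
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10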
00:07:00.841 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.841 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:00.841 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:00.841 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.841 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:00.842 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.104 16:52:19 
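The waitforio helper traced above simply polls bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads; here it sees 67 on the first pass, sleeps 0.25 s, and gets 515 on the next. Stripped of the xtrace noise, a condensed reconstruction of that loop is:

waitforio() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i > 0; i--)); do
        # Ask bdevperf (not the target) for per-bdev I/O statistics.
        ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1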
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.104 16:52:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:01.104 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:01.104 00:07:01.104 Latency(us) 00:07:01.104 [2024-11-20T16:52:19.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:01.104 Job: Nvme0n1 ended in about 0.46 seconds with error 00:07:01.104 Verification LBA range: start 0x0 length 0x400 00:07:01.104 Nvme0n1 : 0.46 1404.77 87.80 140.48 0.00 40003.73 2219.29 40751.48 00:07:01.104 [2024-11-20T16:52:19.640Z] =================================================================================================================== 00:07:01.104 [2024-11-20T16:52:19.640Z] Total : 1404.77 87.80 140.48 0.00 40003.73 2219.29 40751.48 00:07:01.104 [2024-11-20 16:52:19.523839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.104 [2024-11-20 16:52:19.523884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.104 [2024-11-20 16:52:19.523907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.104 [2024-11-20 16:52:19.523919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.104 [2024-11-20 16:52:19.523931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.104 [2024-11-20 16:52:19.523941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.104 [2024-11-20 16:52:19.523952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.104 [2024-11-20 16:52:19.523961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.104 [2024-11-20 16:52:19.523973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.104 [2024-11-20 16:52:19.523982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.523993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.105 [2024-11-20 16:52:19.524752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.105 [2024-11-20 16:52:19.524762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.524974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.524987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.106 [2024-11-20 16:52:19.525288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1129da0 is same with the state(6) to be set 00:07:01.106 [2024-11-20 16:52:19.525489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.106 [2024-11-20 16:52:19.525507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.106 [2024-11-20 16:52:19.525529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.106 [2024-11-20 16:52:19.525548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.106 [2024-11-20 16:52:19.525568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.106 [2024-11-20 16:52:19.525577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112ace0 is same with the state(6) to be set 00:07:01.106 [2024-11-20 16:52:19.526654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:01.106 [2024-11-20 16:52:19.528588] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.106 [2024-11-20 16:52:19.528610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112ace0 (9): Bad file descriptor 00:07:01.106 [2024-11-20 16:52:19.535477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
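The trace above is the core of the host_management exercise: a short loop polls bdevperf's RPC socket until Nvme0n1 has served at least 100 reads, the host is then removed from the subsystem so every queued WRITE completes with "ABORTED - SQ DELETION" (the dump above), and re-adding the host lets the initiator reconnect and log "Resetting controller successful". A minimal sketch of that sequence, assuming the repository rpc.py and the socket path shown in the trace (helper structure is illustrative, not the script's own functions):

    #!/usr/bin/env bash
    # Hedged re-creation of the host_management steps traced above; the RPC calls,
    # NQNs, threshold and sleep interval are taken from the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf_sock=/var/tmp/bdevperf.sock

    # 1. Wait until bdevperf has pushed at least 100 reads through Nvme0n1.
    for ((i = 10; i != 0; i--)); do
        ops=$("$rpc" -s "$perf_sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break
        sleep 0.25
    done

    # 2. Detach the host while writes are in flight; the in-flight WRITEs then
    #    complete with "ABORTED - SQ DELETION", exactly as dumped above.
    "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # 3. Re-admit the host; bdev_nvme reconnects and reports a successful reset.
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1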
00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62448 00:07:02.044 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62448) - No such process 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:02.044 { 00:07:02.044 "params": { 00:07:02.044 "name": "Nvme$subsystem", 00:07:02.044 "trtype": "$TEST_TRANSPORT", 00:07:02.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:02.044 "adrfam": "ipv4", 00:07:02.044 "trsvcid": "$NVMF_PORT", 00:07:02.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:02.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:02.044 "hdgst": ${hdgst:-false}, 00:07:02.044 "ddgst": ${ddgst:-false} 00:07:02.044 }, 00:07:02.044 "method": "bdev_nvme_attach_controller" 00:07:02.044 } 00:07:02.044 EOF 00:07:02.044 )") 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:02.044 16:52:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:02.044 "params": { 00:07:02.044 "name": "Nvme0", 00:07:02.044 "trtype": "tcp", 00:07:02.044 "traddr": "10.0.0.3", 00:07:02.044 "adrfam": "ipv4", 00:07:02.044 "trsvcid": "4420", 00:07:02.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:02.044 "hdgst": false, 00:07:02.044 "ddgst": false 00:07:02.044 }, 00:07:02.044 "method": "bdev_nvme_attach_controller" 00:07:02.044 }' 00:07:02.303 [2024-11-20 16:52:20.586814] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
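The second bdevperf run above receives its controller definition over a file descriptor: gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment (the printf at nvmf/common.sh@586) and bdevperf reads it as --json /dev/fd/62. A stand-alone equivalent could look like the sketch below; the params block is copied from the log, while the outer "subsystems"/"bdev" wrapper is the standard SPDK application JSON-config layout and is an assumption here, since the trace only prints the inner fragment.

    # Sketch only: rebuild the traced bdevperf invocation from a JSON config fed
    # through process substitution instead of a temp file.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    config='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }'
    # Same workload flags as the traced run: 64 queue depth, 64 KiB I/O, verify, 1 second.
    "$bdevperf" --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 1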
00:07:02.303 [2024-11-20 16:52:20.586917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62488 ] 00:07:02.303 [2024-11-20 16:52:20.735844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.303 [2024-11-20 16:52:20.794602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.562 [2024-11-20 16:52:20.856517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.562 Running I/O for 1 seconds... 00:07:03.499 1472.00 IOPS, 92.00 MiB/s 00:07:03.499 Latency(us) 00:07:03.499 [2024-11-20T16:52:22.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.499 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:03.499 Verification LBA range: start 0x0 length 0x400 00:07:03.499 Nvme0n1 : 1.03 1489.84 93.12 0.00 0.00 42137.78 4259.84 39321.60 00:07:03.499 [2024-11-20T16:52:22.035Z] =================================================================================================================== 00:07:03.499 [2024-11-20T16:52:22.035Z] Total : 1489.84 93.12 0.00 0.00 42137.78 4259.84 39321.60 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.758 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.758 rmmod nvme_tcp 00:07:04.017 rmmod nvme_fabrics 00:07:04.017 rmmod nvme_keyring 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62407 ']' 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62407 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62407 ']' 00:07:04.017 16:52:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62407 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62407 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:04.017 killing process with pid 62407 00:07:04.017 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:04.018 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62407' 00:07:04.018 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62407 00:07:04.018 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62407 00:07:04.277 [2024-11-20 16:52:22.562985] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:04.277 16:52:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:04.277 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:04.536 00:07:04.536 real 0m5.528s 00:07:04.536 user 0m19.447s 00:07:04.536 sys 0m1.581s 00:07:04.536 ************************************ 00:07:04.536 END TEST nvmf_host_management 00:07:04.536 ************************************ 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.536 16:52:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.536 ************************************ 00:07:04.536 START TEST nvmf_lvol 00:07:04.537 ************************************ 00:07:04.537 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:04.537 * Looking for test storage... 
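Before the nvmf_lvol run continues below, the nvmftestfini trace above unloads the host NVMe/TCP modules, restores only the iptables rules that do not carry the SPDK_NVMF marker, and dismantles the veth/bridge topology interface by interface. Collected without the xtrace noise, the teardown amounts to roughly the following; the final namespace deletion is what remove_spdk_ns is assumed to do, since the trace only shows that wrapper being invoked.

    # Teardown sketch matching the nvmf_veth_fini steps traced above.
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics       # unload host modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop only SPDK's rules

    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster                              # detach from the bridge
        ip link set "$peer" down
    done
    ip link delete nvmf_br type bridge                            # remove the bridge itself
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if     # target-side veth ends
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                              # assumed effect of remove_spdk_ns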
00:07:04.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.537 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.537 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.537 16:52:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.797 --rc genhtml_branch_coverage=1 00:07:04.797 --rc genhtml_function_coverage=1 00:07:04.797 --rc genhtml_legend=1 00:07:04.797 --rc geninfo_all_blocks=1 00:07:04.797 --rc geninfo_unexecuted_blocks=1 00:07:04.797 00:07:04.797 ' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.797 --rc genhtml_branch_coverage=1 00:07:04.797 --rc genhtml_function_coverage=1 00:07:04.797 --rc genhtml_legend=1 00:07:04.797 --rc geninfo_all_blocks=1 00:07:04.797 --rc geninfo_unexecuted_blocks=1 00:07:04.797 00:07:04.797 ' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.797 --rc genhtml_branch_coverage=1 00:07:04.797 --rc genhtml_function_coverage=1 00:07:04.797 --rc genhtml_legend=1 00:07:04.797 --rc geninfo_all_blocks=1 00:07:04.797 --rc geninfo_unexecuted_blocks=1 00:07:04.797 00:07:04.797 ' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.797 --rc genhtml_branch_coverage=1 00:07:04.797 --rc genhtml_function_coverage=1 00:07:04.797 --rc genhtml_legend=1 00:07:04.797 --rc geninfo_all_blocks=1 00:07:04.797 --rc geninfo_unexecuted_blocks=1 00:07:04.797 00:07:04.797 ' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.797 16:52:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.797 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.798 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:04.798 
16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
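At this point nvmftestinit enters nvmf_veth_init: the variables above name two initiator interfaces on the host (10.0.0.1 and 10.0.0.2) and two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, and the ip commands that follow in the trace build exactly that topology. Condensed into plain commands (same names and addresses as the trace, ordering slightly regrouped), the setup is roughly:

    # Condensed view of the nvmf_veth_init topology built in the trace that follows.
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the *_br peers to a single bridge
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br
    done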
00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:04.798 Cannot find device "nvmf_init_br" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:04.798 Cannot find device "nvmf_init_br2" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:04.798 Cannot find device "nvmf_tgt_br" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.798 Cannot find device "nvmf_tgt_br2" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:04.798 Cannot find device "nvmf_init_br" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:04.798 Cannot find device "nvmf_init_br2" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:04.798 Cannot find device "nvmf_tgt_br" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:04.798 Cannot find device "nvmf_tgt_br2" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:04.798 Cannot find device "nvmf_br" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:04.798 Cannot find device "nvmf_init_if" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:04.798 Cannot find device "nvmf_init_if2" 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.798 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:05.057 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:05.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:05.058 00:07:05.058 --- 10.0.0.3 ping statistics --- 00:07:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.058 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:05.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:05.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:07:05.058 00:07:05.058 --- 10.0.0.4 ping statistics --- 00:07:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.058 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:05.058 00:07:05.058 --- 10.0.0.1 ping statistics --- 00:07:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.058 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:05.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:05.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:07:05.058 00:07:05.058 --- 10.0.0.2 ping statistics --- 00:07:05.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.058 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62763 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62763 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62763 ']' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.058 16:52:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.058 [2024-11-20 16:52:23.581848] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:07:05.058 [2024-11-20 16:52:23.582201] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.317 [2024-11-20 16:52:23.737209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.317 [2024-11-20 16:52:23.806532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.317 [2024-11-20 16:52:23.806600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.317 [2024-11-20 16:52:23.806629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.317 [2024-11-20 16:52:23.806640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.317 [2024-11-20 16:52:23.806649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.317 [2024-11-20 16:52:23.807955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.317 [2024-11-20 16:52:23.808045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.317 [2024-11-20 16:52:23.808052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.574 [2024-11-20 16:52:23.866085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.141 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.400 [2024-11-20 16:52:24.886888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.400 16:52:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:06.968 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:06.968 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:07.285 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:07.285 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:07.544 16:52:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:07.804 16:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fb3c88f8-0d43-420e-b17a-dbeefa4b1637 00:07:07.804 16:52:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fb3c88f8-0d43-420e-b17a-dbeefa4b1637 lvol 20 00:07:08.062 16:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bb13e863-0e71-44b9-8429-1849d94d2fe9 00:07:08.062 16:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.321 16:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb13e863-0e71-44b9-8429-1849d94d2fe9 00:07:08.580 16:52:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:08.839 [2024-11-20 16:52:27.173202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:08.839 16:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:09.097 16:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62844 00:07:09.098 16:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:09.098 16:52:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:10.050 16:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot bb13e863-0e71-44b9-8429-1849d94d2fe9 MY_SNAPSHOT 00:07:10.309 16:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=16336768-b385-4203-9557-63b2a1011531 00:07:10.309 16:52:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize bb13e863-0e71-44b9-8429-1849d94d2fe9 30 00:07:10.876 16:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 16336768-b385-4203-9557-63b2a1011531 MY_CLONE 00:07:11.136 16:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28da8578-548d-4d5d-ab15-5e47bd18313a 00:07:11.136 16:52:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 28da8578-548d-4d5d-ab15-5e47bd18313a 00:07:11.713 16:52:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62844 00:07:19.885 Initializing NVMe Controllers 00:07:19.885 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:19.885 Controller IO queue size 128, less than required. 00:07:19.885 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:19.885 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:19.885 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:19.885 Initialization complete. Launching workers. 
00:07:19.885 ======================================================== 00:07:19.885 Latency(us) 00:07:19.885 Device Information : IOPS MiB/s Average min max 00:07:19.885 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9176.30 35.84 13953.27 2041.05 86139.69 00:07:19.885 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8743.10 34.15 14642.45 2812.10 78378.43 00:07:19.885 ======================================================== 00:07:19.885 Total : 17919.40 70.00 14289.53 2041.05 86139.69 00:07:19.885 00:07:19.885 16:52:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.885 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bb13e863-0e71-44b9-8429-1849d94d2fe9 00:07:19.885 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb3c88f8-0d43-420e-b17a-dbeefa4b1637 00:07:20.144 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:20.144 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:20.144 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:20.144 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.144 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.403 rmmod nvme_tcp 00:07:20.403 rmmod nvme_fabrics 00:07:20.403 rmmod nvme_keyring 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62763 ']' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62763 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62763 ']' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62763 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62763 00:07:20.403 killing process with pid 62763 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62763' 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62763 00:07:20.403 16:52:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62763 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:20.662 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:20.922 ************************************ 00:07:20.922 END TEST nvmf_lvol 00:07:20.922 ************************************ 00:07:20.922 00:07:20.922 real 0m16.384s 00:07:20.922 user 
1m6.966s 00:07:20.922 sys 0m4.371s 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.922 ************************************ 00:07:20.922 START TEST nvmf_lvs_grow 00:07:20.922 ************************************ 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:20.922 * Looking for test storage... 00:07:20.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.922 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.182 --rc genhtml_branch_coverage=1 00:07:21.182 --rc genhtml_function_coverage=1 00:07:21.182 --rc genhtml_legend=1 00:07:21.182 --rc geninfo_all_blocks=1 00:07:21.182 --rc geninfo_unexecuted_blocks=1 00:07:21.182 00:07:21.182 ' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.182 --rc genhtml_branch_coverage=1 00:07:21.182 --rc genhtml_function_coverage=1 00:07:21.182 --rc genhtml_legend=1 00:07:21.182 --rc geninfo_all_blocks=1 00:07:21.182 --rc geninfo_unexecuted_blocks=1 00:07:21.182 00:07:21.182 ' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.182 --rc genhtml_branch_coverage=1 00:07:21.182 --rc genhtml_function_coverage=1 00:07:21.182 --rc genhtml_legend=1 00:07:21.182 --rc geninfo_all_blocks=1 00:07:21.182 --rc geninfo_unexecuted_blocks=1 00:07:21.182 00:07:21.182 ' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.182 --rc genhtml_branch_coverage=1 00:07:21.182 --rc genhtml_function_coverage=1 00:07:21.182 --rc genhtml_legend=1 00:07:21.182 --rc geninfo_all_blocks=1 00:07:21.182 --rc geninfo_unexecuted_blocks=1 00:07:21.182 00:07:21.182 ' 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.182 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:21.182 16:52:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
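The trace above shows scripts/common.sh choosing lcov options by comparing the installed lcov version against 2 (the "lt 1.15 2" call, which delegates to cmp_versions with IFS split on '.', '-' and ':'). Below is a minimal sketch of that split-and-compare idiom; ver_lt is a hypothetical helper name, it only handles purely numeric components, and it is not the exact cmp_versions implementation from scripts/common.sh.

    # Sketch of the version comparison traced above: split both versions on
    # '.', '-' and ':', then compare component by component, missing fields = 0.
    ver_lt() {
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0   # first smaller component: $1 < $2
            (( a > b )) && return 1   # first larger component: $1 >= $2
        done
        return 1                      # all components equal: not less-than
    }

    ver_lt 1.15 2 && echo "1.15 sorts before 2"   # usage example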
00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:21.183 Cannot find device "nvmf_init_br" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:21.183 Cannot find device "nvmf_init_br2" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:21.183 Cannot find device "nvmf_tgt_br" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.183 Cannot find device "nvmf_tgt_br2" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:21.183 Cannot find device "nvmf_init_br" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:21.183 Cannot find device "nvmf_init_br2" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:21.183 Cannot find device "nvmf_tgt_br" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:21.183 Cannot find device "nvmf_tgt_br2" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:21.183 Cannot find device "nvmf_br" 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:21.183 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:21.184 Cannot find device "nvmf_init_if" 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:21.184 Cannot find device "nvmf_init_if2" 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.184 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
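The nvmf_veth_init sequence above assembles the test network: a nvmf_tgt_ns_spdk namespace for the SPDK target, two veth pairs per side, addresses 10.0.0.1/.2 on the initiator ends and 10.0.0.3/.4 inside the namespace, and an nvmf_br bridge joining the host-side peers; the iptables ACCEPT rules and connectivity pings follow next in the log. The same topology, condensed into a standalone sketch (interface names and addresses taken from the trace; this is not the common.sh implementation, and cleanup plus error handling are omitted):

    #!/usr/bin/env bash
    # Rebuild the veth/bridge layout used by the NVMe/TCP tests (run as root).
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Initiator-side pairs stay in the default namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # Target-side pairs: the *_if ends move into the namespace.
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Addresses as in the log: initiators get .1/.2, the target gets .3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # Bridge the host-side peers so 10.0.0.1/.2 can reach 10.0.0.3/.4.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done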
00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:21.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:21.443 00:07:21.443 --- 10.0.0.3 ping statistics --- 00:07:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.443 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:21.443 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:21.443 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:07:21.443 00:07:21.443 --- 10.0.0.4 ping statistics --- 00:07:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.443 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:21.443 00:07:21.443 --- 10.0.0.1 ping statistics --- 00:07:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.443 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:21.443 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:21.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:21.444 00:07:21.444 --- 10.0.0.2 ping statistics --- 00:07:21.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.444 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63220 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63220 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63220 ']' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.444 16:52:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.703 [2024-11-20 16:52:40.022710] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:07:21.703 [2024-11-20 16:52:40.022811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.703 [2024-11-20 16:52:40.176189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.961 [2024-11-20 16:52:40.239470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.961 [2024-11-20 16:52:40.239540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.961 [2024-11-20 16:52:40.239554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.961 [2024-11-20 16:52:40.239564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.961 [2024-11-20 16:52:40.239573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.961 [2024-11-20 16:52:40.240058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.961 [2024-11-20 16:52:40.296629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.529 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:22.787 [2024-11-20 16:52:41.289216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.787 ************************************ 00:07:22.787 START TEST lvs_grow_clean 00:07:22.787 ************************************ 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:22.787 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:22.788 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:22.788 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:22.788 16:52:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:22.788 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:22.788 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:22.788 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:23.055 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:23.055 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.318 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:23.318 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:23.615 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:23.615 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:23.615 16:52:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:23.873 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:23.873 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:23.873 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7c03907d-712e-4e8f-929f-9f36c51f28ef lvol 150 00:07:24.132 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3497fd8f-1dbe-4b15-9ed3-3b85656ea740 00:07:24.132 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:24.132 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.391 [2024-11-20 16:52:42.783991] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:24.391 [2024-11-20 16:52:42.784536] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.391 true 00:07:24.391 16:52:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:24.391 16:52:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:24.649 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:24.649 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.908 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3497fd8f-1dbe-4b15-9ed3-3b85656ea740 00:07:25.167 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:25.426 [2024-11-20 16:52:43.824680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:25.426 16:52:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63308 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63308 /var/tmp/bdevperf.sock 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63308 ']' 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.684 16:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:25.684 [2024-11-20 16:52:44.152467] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:07:25.684 [2024-11-20 16:52:44.152550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63308 ] 00:07:25.943 [2024-11-20 16:52:44.302997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.943 [2024-11-20 16:52:44.371390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.943 [2024-11-20 16:52:44.428912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.877 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.877 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:26.877 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:27.136 Nvme0n1 00:07:27.136 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:27.394 [ 00:07:27.394 { 00:07:27.394 "name": "Nvme0n1", 00:07:27.394 "aliases": [ 00:07:27.394 "3497fd8f-1dbe-4b15-9ed3-3b85656ea740" 00:07:27.394 ], 00:07:27.394 "product_name": "NVMe disk", 00:07:27.394 "block_size": 4096, 00:07:27.394 "num_blocks": 38912, 00:07:27.394 "uuid": "3497fd8f-1dbe-4b15-9ed3-3b85656ea740", 00:07:27.394 "numa_id": -1, 00:07:27.394 "assigned_rate_limits": { 00:07:27.394 "rw_ios_per_sec": 0, 00:07:27.394 "rw_mbytes_per_sec": 0, 00:07:27.394 "r_mbytes_per_sec": 0, 00:07:27.394 "w_mbytes_per_sec": 0 00:07:27.394 }, 00:07:27.394 "claimed": false, 00:07:27.394 "zoned": false, 00:07:27.394 "supported_io_types": { 00:07:27.394 "read": true, 00:07:27.394 "write": true, 00:07:27.394 "unmap": true, 00:07:27.394 "flush": true, 00:07:27.394 "reset": true, 00:07:27.394 "nvme_admin": true, 00:07:27.394 "nvme_io": true, 00:07:27.394 "nvme_io_md": false, 00:07:27.394 "write_zeroes": true, 00:07:27.394 "zcopy": false, 00:07:27.394 "get_zone_info": false, 00:07:27.394 "zone_management": false, 00:07:27.394 "zone_append": false, 00:07:27.394 "compare": true, 00:07:27.394 "compare_and_write": true, 00:07:27.394 "abort": true, 00:07:27.394 "seek_hole": false, 00:07:27.394 "seek_data": false, 00:07:27.394 "copy": true, 00:07:27.394 "nvme_iov_md": false 00:07:27.394 }, 00:07:27.394 "memory_domains": [ 00:07:27.394 { 00:07:27.394 "dma_device_id": "system", 00:07:27.394 "dma_device_type": 1 00:07:27.394 } 00:07:27.394 ], 00:07:27.394 "driver_specific": { 00:07:27.394 "nvme": [ 00:07:27.394 { 00:07:27.394 "trid": { 00:07:27.394 "trtype": "TCP", 00:07:27.394 "adrfam": "IPv4", 00:07:27.394 "traddr": "10.0.0.3", 00:07:27.395 "trsvcid": "4420", 00:07:27.395 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:27.395 }, 00:07:27.395 "ctrlr_data": { 00:07:27.395 "cntlid": 1, 00:07:27.395 "vendor_id": "0x8086", 00:07:27.395 "model_number": "SPDK bdev Controller", 00:07:27.395 "serial_number": "SPDK0", 00:07:27.395 "firmware_revision": "25.01", 00:07:27.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.395 "oacs": { 00:07:27.395 "security": 0, 00:07:27.395 "format": 0, 00:07:27.395 "firmware": 0, 
00:07:27.395 "ns_manage": 0 00:07:27.395 }, 00:07:27.395 "multi_ctrlr": true, 00:07:27.395 "ana_reporting": false 00:07:27.395 }, 00:07:27.395 "vs": { 00:07:27.395 "nvme_version": "1.3" 00:07:27.395 }, 00:07:27.395 "ns_data": { 00:07:27.395 "id": 1, 00:07:27.395 "can_share": true 00:07:27.395 } 00:07:27.395 } 00:07:27.395 ], 00:07:27.395 "mp_policy": "active_passive" 00:07:27.395 } 00:07:27.395 } 00:07:27.395 ] 00:07:27.395 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63332 00:07:27.395 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:27.395 16:52:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:27.653 Running I/O for 10 seconds... 00:07:28.588 Latency(us) 00:07:28.589 [2024-11-20T16:52:47.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.589 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:28.589 [2024-11-20T16:52:47.125Z] =================================================================================================================== 00:07:28.589 [2024-11-20T16:52:47.125Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:07:28.589 00:07:29.523 16:52:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:29.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.523 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:07:29.523 [2024-11-20T16:52:48.059Z] =================================================================================================================== 00:07:29.523 [2024-11-20T16:52:48.060Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:07:29.524 00:07:29.782 true 00:07:29.782 16:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:29.782 16:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:30.040 16:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:30.040 16:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:30.040 16:52:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63332 00:07:30.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.608 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:07:30.608 [2024-11-20T16:52:49.144Z] =================================================================================================================== 00:07:30.608 [2024-11-20T16:52:49.144Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:07:30.608 00:07:31.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.546 Nvme0n1 : 4.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:07:31.546 [2024-11-20T16:52:50.082Z] 
=================================================================================================================== 00:07:31.546 [2024-11-20T16:52:50.082Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:07:31.546 00:07:32.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.480 Nvme0n1 : 5.00 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:07:32.480 [2024-11-20T16:52:51.016Z] =================================================================================================================== 00:07:32.480 [2024-11-20T16:52:51.016Z] Total : 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:07:32.480 00:07:33.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.853 Nvme0n1 : 6.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:33.853 [2024-11-20T16:52:52.389Z] =================================================================================================================== 00:07:33.853 [2024-11-20T16:52:52.389Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:33.853 00:07:34.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.790 Nvme0n1 : 7.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:34.790 [2024-11-20T16:52:53.326Z] =================================================================================================================== 00:07:34.790 [2024-11-20T16:52:53.326Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:34.790 00:07:35.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.726 Nvme0n1 : 8.00 6873.88 26.85 0.00 0.00 0.00 0.00 0.00 00:07:35.726 [2024-11-20T16:52:54.262Z] =================================================================================================================== 00:07:35.726 [2024-11-20T16:52:54.262Z] Total : 6873.88 26.85 0.00 0.00 0.00 0.00 0.00 00:07:35.726 00:07:36.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.663 Nvme0n1 : 9.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:36.663 [2024-11-20T16:52:55.199Z] =================================================================================================================== 00:07:36.663 [2024-11-20T16:52:55.199Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:07:36.663 00:07:37.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.601 Nvme0n1 : 10.00 6845.30 26.74 0.00 0.00 0.00 0.00 0.00 00:07:37.601 [2024-11-20T16:52:56.137Z] =================================================================================================================== 00:07:37.601 [2024-11-20T16:52:56.137Z] Total : 6845.30 26.74 0.00 0.00 0.00 0.00 0.00 00:07:37.601 00:07:37.601 00:07:37.601 Latency(us) 00:07:37.601 [2024-11-20T16:52:56.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.601 Nvme0n1 : 10.01 6854.64 26.78 0.00 0.00 18667.86 15609.48 72923.69 00:07:37.601 [2024-11-20T16:52:56.137Z] =================================================================================================================== 00:07:37.601 [2024-11-20T16:52:56.137Z] Total : 6854.64 26.78 0.00 0.00 18667.86 15609.48 72923.69 00:07:37.601 { 00:07:37.601 "results": [ 00:07:37.601 { 00:07:37.601 "job": "Nvme0n1", 00:07:37.601 "core_mask": "0x2", 00:07:37.601 "workload": "randwrite", 00:07:37.601 "status": "finished", 00:07:37.601 "queue_depth": 128, 00:07:37.601 "io_size": 4096, 00:07:37.601 "runtime": 
10.005044, 00:07:37.601 "iops": 6854.642518313763, 00:07:37.601 "mibps": 26.775947337163135, 00:07:37.601 "io_failed": 0, 00:07:37.601 "io_timeout": 0, 00:07:37.601 "avg_latency_us": 18667.858681452988, 00:07:37.601 "min_latency_us": 15609.483636363637, 00:07:37.601 "max_latency_us": 72923.69454545455 00:07:37.601 } 00:07:37.601 ], 00:07:37.601 "core_count": 1 00:07:37.601 } 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63308 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63308 ']' 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63308 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.601 16:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63308 00:07:37.601 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:37.601 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:37.601 killing process with pid 63308 00:07:37.602 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63308' 00:07:37.602 Received shutdown signal, test time was about 10.000000 seconds 00:07:37.602 00:07:37.602 Latency(us) 00:07:37.602 [2024-11-20T16:52:56.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.602 [2024-11-20T16:52:56.138Z] =================================================================================================================== 00:07:37.602 [2024-11-20T16:52:56.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:37.602 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63308 00:07:37.602 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63308 00:07:37.860 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:38.119 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:38.378 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:38.378 16:52:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:38.637 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:38.637 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:38.637 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.897 [2024-11-20 16:52:57.385405] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.897 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:39.157 request: 00:07:39.157 { 00:07:39.157 "uuid": "7c03907d-712e-4e8f-929f-9f36c51f28ef", 00:07:39.157 "method": "bdev_lvol_get_lvstores", 00:07:39.157 "req_id": 1 00:07:39.157 } 00:07:39.157 Got JSON-RPC error response 00:07:39.157 response: 00:07:39.157 { 00:07:39.157 "code": -19, 00:07:39.157 "message": "No such device" 00:07:39.157 } 00:07:39.416 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:39.416 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.416 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.416 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.416 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.675 aio_bdev 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3497fd8f-1dbe-4b15-9ed3-3b85656ea740 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3497fd8f-1dbe-4b15-9ed3-3b85656ea740 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.675 16:52:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.935 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3497fd8f-1dbe-4b15-9ed3-3b85656ea740 -t 2000 00:07:40.194 [ 00:07:40.194 { 00:07:40.194 "name": "3497fd8f-1dbe-4b15-9ed3-3b85656ea740", 00:07:40.194 "aliases": [ 00:07:40.194 "lvs/lvol" 00:07:40.194 ], 00:07:40.194 "product_name": "Logical Volume", 00:07:40.194 "block_size": 4096, 00:07:40.194 "num_blocks": 38912, 00:07:40.194 "uuid": "3497fd8f-1dbe-4b15-9ed3-3b85656ea740", 00:07:40.194 "assigned_rate_limits": { 00:07:40.194 "rw_ios_per_sec": 0, 00:07:40.194 "rw_mbytes_per_sec": 0, 00:07:40.194 "r_mbytes_per_sec": 0, 00:07:40.194 "w_mbytes_per_sec": 0 00:07:40.194 }, 00:07:40.194 "claimed": false, 00:07:40.194 "zoned": false, 00:07:40.194 "supported_io_types": { 00:07:40.194 "read": true, 00:07:40.194 "write": true, 00:07:40.194 "unmap": true, 00:07:40.194 "flush": false, 00:07:40.194 "reset": true, 00:07:40.194 "nvme_admin": false, 00:07:40.194 "nvme_io": false, 00:07:40.194 "nvme_io_md": false, 00:07:40.194 "write_zeroes": true, 00:07:40.194 "zcopy": false, 00:07:40.194 "get_zone_info": false, 00:07:40.194 "zone_management": false, 00:07:40.194 "zone_append": false, 00:07:40.194 "compare": false, 00:07:40.194 "compare_and_write": false, 00:07:40.194 "abort": false, 00:07:40.194 "seek_hole": true, 00:07:40.194 "seek_data": true, 00:07:40.194 "copy": false, 00:07:40.194 "nvme_iov_md": false 00:07:40.194 }, 00:07:40.194 "driver_specific": { 00:07:40.194 "lvol": { 00:07:40.194 "lvol_store_uuid": "7c03907d-712e-4e8f-929f-9f36c51f28ef", 00:07:40.194 "base_bdev": "aio_bdev", 00:07:40.194 "thin_provision": false, 00:07:40.194 "num_allocated_clusters": 38, 00:07:40.194 "snapshot": false, 00:07:40.194 "clone": false, 00:07:40.194 "esnap_clone": false 00:07:40.194 } 00:07:40.194 } 00:07:40.194 } 00:07:40.194 ] 00:07:40.194 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:40.194 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:40.194 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:40.454 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:40.454 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:07:40.454 16:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:40.713 16:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.713 16:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3497fd8f-1dbe-4b15-9ed3-3b85656ea740 00:07:40.971 16:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c03907d-712e-4e8f-929f-9f36c51f28ef 00:07:41.230 16:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.488 16:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:41.830 00:07:41.830 real 0m19.025s 00:07:41.830 user 0m18.144s 00:07:41.830 sys 0m2.585s 00:07:41.830 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.830 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.830 ************************************ 00:07:41.830 END TEST lvs_grow_clean 00:07:41.830 ************************************ 00:07:42.089 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:42.089 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.089 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.089 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.089 ************************************ 00:07:42.089 START TEST lvs_grow_dirty 00:07:42.089 ************************************ 00:07:42.089 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.090 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.350 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.350 16:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.608 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:42.609 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:42.609 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.868 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.868 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:43.127 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32272334-42e1-4ea8-acd3-ebb876e266eb lvol 150 00:07:43.386 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:07:43.386 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:43.386 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.386 [2024-11-20 16:53:01.916056] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:43.386 [2024-11-20 16:53:01.916350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.645 true 00:07:43.645 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:43.645 16:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.904 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.904 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.163 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:07:44.422 16:53:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:44.680 [2024-11-20 16:53:02.968598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:44.680 16:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:44.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63593 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63593 /var/tmp/bdevperf.sock 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63593 ']' 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.940 16:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.940 [2024-11-20 16:53:03.332771] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:07:44.940 [2024-11-20 16:53:03.333059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63593 ] 00:07:45.199 [2024-11-20 16:53:03.479820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.199 [2024-11-20 16:53:03.541549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.199 [2024-11-20 16:53:03.599665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.788 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.788 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:45.788 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:46.363 Nvme0n1 00:07:46.363 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:46.622 [ 00:07:46.622 { 00:07:46.622 "name": "Nvme0n1", 00:07:46.622 "aliases": [ 00:07:46.622 "a32887a1-7643-4f79-b1dd-88ba62df9c2b" 00:07:46.622 ], 00:07:46.622 "product_name": "NVMe disk", 00:07:46.622 "block_size": 4096, 00:07:46.622 "num_blocks": 38912, 00:07:46.622 "uuid": "a32887a1-7643-4f79-b1dd-88ba62df9c2b", 00:07:46.622 "numa_id": -1, 00:07:46.622 "assigned_rate_limits": { 00:07:46.622 "rw_ios_per_sec": 0, 00:07:46.622 "rw_mbytes_per_sec": 0, 00:07:46.622 "r_mbytes_per_sec": 0, 00:07:46.622 "w_mbytes_per_sec": 0 00:07:46.622 }, 00:07:46.622 "claimed": false, 00:07:46.622 "zoned": false, 00:07:46.622 "supported_io_types": { 00:07:46.622 "read": true, 00:07:46.622 "write": true, 00:07:46.622 "unmap": true, 00:07:46.622 "flush": true, 00:07:46.622 "reset": true, 00:07:46.622 "nvme_admin": true, 00:07:46.622 "nvme_io": true, 00:07:46.622 "nvme_io_md": false, 00:07:46.622 "write_zeroes": true, 00:07:46.622 "zcopy": false, 00:07:46.622 "get_zone_info": false, 00:07:46.622 "zone_management": false, 00:07:46.622 "zone_append": false, 00:07:46.622 "compare": true, 00:07:46.622 "compare_and_write": true, 00:07:46.622 "abort": true, 00:07:46.622 "seek_hole": false, 00:07:46.622 "seek_data": false, 00:07:46.622 "copy": true, 00:07:46.622 "nvme_iov_md": false 00:07:46.622 }, 00:07:46.622 "memory_domains": [ 00:07:46.622 { 00:07:46.622 "dma_device_id": "system", 00:07:46.622 "dma_device_type": 1 00:07:46.622 } 00:07:46.622 ], 00:07:46.622 "driver_specific": { 00:07:46.622 "nvme": [ 00:07:46.622 { 00:07:46.622 "trid": { 00:07:46.622 "trtype": "TCP", 00:07:46.622 "adrfam": "IPv4", 00:07:46.622 "traddr": "10.0.0.3", 00:07:46.622 "trsvcid": "4420", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:46.622 }, 00:07:46.622 "ctrlr_data": { 00:07:46.622 "cntlid": 1, 00:07:46.622 "vendor_id": "0x8086", 00:07:46.622 "model_number": "SPDK bdev Controller", 00:07:46.622 "serial_number": "SPDK0", 00:07:46.622 "firmware_revision": "25.01", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.622 "oacs": { 00:07:46.622 "security": 0, 00:07:46.622 "format": 0, 00:07:46.622 "firmware": 0, 
00:07:46.622 "ns_manage": 0 00:07:46.622 }, 00:07:46.622 "multi_ctrlr": true, 00:07:46.622 "ana_reporting": false 00:07:46.622 }, 00:07:46.622 "vs": { 00:07:46.622 "nvme_version": "1.3" 00:07:46.622 }, 00:07:46.622 "ns_data": { 00:07:46.622 "id": 1, 00:07:46.622 "can_share": true 00:07:46.622 } 00:07:46.622 } 00:07:46.622 ], 00:07:46.622 "mp_policy": "active_passive" 00:07:46.622 } 00:07:46.622 } 00:07:46.622 ] 00:07:46.622 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63617 00:07:46.622 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:46.622 16:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:46.622 Running I/O for 10 seconds... 00:07:47.558 Latency(us) 00:07:47.558 [2024-11-20T16:53:06.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.558 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:47.558 [2024-11-20T16:53:06.094Z] =================================================================================================================== 00:07:47.558 [2024-11-20T16:53:06.094Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:07:47.558 00:07:48.495 16:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:48.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.754 Nvme0n1 : 2.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:48.754 [2024-11-20T16:53:07.290Z] =================================================================================================================== 00:07:48.754 [2024-11-20T16:53:07.290Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:48.754 00:07:48.754 true 00:07:48.754 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:48.754 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:49.321 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:49.321 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:49.321 16:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63617 00:07:49.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.581 Nvme0n1 : 3.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:49.581 [2024-11-20T16:53:08.117Z] =================================================================================================================== 00:07:49.581 [2024-11-20T16:53:08.117Z] Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:49.581 00:07:50.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.958 Nvme0n1 : 4.00 7270.75 28.40 0.00 0.00 0.00 0.00 0.00 00:07:50.958 [2024-11-20T16:53:09.494Z] 
=================================================================================================================== 00:07:50.958 [2024-11-20T16:53:09.494Z] Total : 7270.75 28.40 0.00 0.00 0.00 0.00 0.00 00:07:50.958 00:07:51.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.896 Nvme0n1 : 5.00 6935.00 27.09 0.00 0.00 0.00 0.00 0.00 00:07:51.896 [2024-11-20T16:53:10.432Z] =================================================================================================================== 00:07:51.896 [2024-11-20T16:53:10.432Z] Total : 6935.00 27.09 0.00 0.00 0.00 0.00 0.00 00:07:51.896 00:07:52.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.832 Nvme0n1 : 6.00 6964.50 27.21 0.00 0.00 0.00 0.00 0.00 00:07:52.832 [2024-11-20T16:53:11.368Z] =================================================================================================================== 00:07:52.832 [2024-11-20T16:53:11.368Z] Total : 6964.50 27.21 0.00 0.00 0.00 0.00 0.00 00:07:52.832 00:07:53.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.768 Nvme0n1 : 7.00 6985.57 27.29 0.00 0.00 0.00 0.00 0.00 00:07:53.768 [2024-11-20T16:53:12.304Z] =================================================================================================================== 00:07:53.768 [2024-11-20T16:53:12.304Z] Total : 6985.57 27.29 0.00 0.00 0.00 0.00 0.00 00:07:53.768 00:07:54.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.705 Nvme0n1 : 8.00 6985.50 27.29 0.00 0.00 0.00 0.00 0.00 00:07:54.705 [2024-11-20T16:53:13.241Z] =================================================================================================================== 00:07:54.705 [2024-11-20T16:53:13.241Z] Total : 6985.50 27.29 0.00 0.00 0.00 0.00 0.00 00:07:54.705 00:07:55.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.640 Nvme0n1 : 9.00 6985.44 27.29 0.00 0.00 0.00 0.00 0.00 00:07:55.640 [2024-11-20T16:53:14.176Z] =================================================================================================================== 00:07:55.640 [2024-11-20T16:53:14.176Z] Total : 6985.44 27.29 0.00 0.00 0.00 0.00 0.00 00:07:55.640 00:07:56.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.577 Nvme0n1 : 10.00 6985.40 27.29 0.00 0.00 0.00 0.00 0.00 00:07:56.577 [2024-11-20T16:53:15.113Z] =================================================================================================================== 00:07:56.577 [2024-11-20T16:53:15.113Z] Total : 6985.40 27.29 0.00 0.00 0.00 0.00 0.00 00:07:56.577 00:07:56.577 00:07:56.577 Latency(us) 00:07:56.577 [2024-11-20T16:53:15.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.577 Nvme0n1 : 10.01 6991.43 27.31 0.00 0.00 18302.21 12332.68 228780.22 00:07:56.577 [2024-11-20T16:53:15.113Z] =================================================================================================================== 00:07:56.577 [2024-11-20T16:53:15.113Z] Total : 6991.43 27.31 0.00 0.00 18302.21 12332.68 228780.22 00:07:56.577 { 00:07:56.577 "results": [ 00:07:56.577 { 00:07:56.577 "job": "Nvme0n1", 00:07:56.577 "core_mask": "0x2", 00:07:56.577 "workload": "randwrite", 00:07:56.577 "status": "finished", 00:07:56.577 "queue_depth": 128, 00:07:56.577 "io_size": 4096, 00:07:56.577 "runtime": 
10.009689, 00:07:56.577 "iops": 6991.426007341487, 00:07:56.577 "mibps": 27.310257841177684, 00:07:56.577 "io_failed": 0, 00:07:56.577 "io_timeout": 0, 00:07:56.577 "avg_latency_us": 18302.205982733223, 00:07:56.577 "min_latency_us": 12332.683636363636, 00:07:56.577 "max_latency_us": 228780.21818181817 00:07:56.578 } 00:07:56.578 ], 00:07:56.578 "core_count": 1 00:07:56.578 } 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63593 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63593 ']' 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63593 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63593 00:07:56.840 killing process with pid 63593 00:07:56.840 Received shutdown signal, test time was about 10.000000 seconds 00:07:56.840 00:07:56.840 Latency(us) 00:07:56.840 [2024-11-20T16:53:15.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.840 [2024-11-20T16:53:15.376Z] =================================================================================================================== 00:07:56.840 [2024-11-20T16:53:15.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63593' 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63593 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63593 00:07:56.840 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:57.409 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:57.409 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:57.409 16:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63220 
00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63220 00:07:57.668 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63220 Killed "${NVMF_APP[@]}" "$@" 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63755 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63755 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63755 ']' 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.668 16:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.927 [2024-11-20 16:53:16.259272] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:07:57.927 [2024-11-20 16:53:16.259612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.927 [2024-11-20 16:53:16.415065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.186 [2024-11-20 16:53:16.475790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.186 [2024-11-20 16:53:16.476107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.186 [2024-11-20 16:53:16.476252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.186 [2024-11-20 16:53:16.476379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.186 [2024-11-20 16:53:16.476397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:58.186 [2024-11-20 16:53:16.476785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.186 [2024-11-20 16:53:16.531160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.755 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.322 [2024-11-20 16:53:17.555938] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:59.322 [2024-11-20 16:53:17.556415] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:59.322 [2024-11-20 16:53:17.556745] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.322 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:59.580 16:53:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a32887a1-7643-4f79-b1dd-88ba62df9c2b -t 2000 00:07:59.839 [ 00:07:59.839 { 00:07:59.839 "name": "a32887a1-7643-4f79-b1dd-88ba62df9c2b", 00:07:59.839 "aliases": [ 00:07:59.839 "lvs/lvol" 00:07:59.839 ], 00:07:59.839 "product_name": "Logical Volume", 00:07:59.839 "block_size": 4096, 00:07:59.839 "num_blocks": 38912, 00:07:59.839 "uuid": "a32887a1-7643-4f79-b1dd-88ba62df9c2b", 00:07:59.839 "assigned_rate_limits": { 00:07:59.839 "rw_ios_per_sec": 0, 00:07:59.839 "rw_mbytes_per_sec": 0, 00:07:59.839 "r_mbytes_per_sec": 0, 00:07:59.839 "w_mbytes_per_sec": 0 00:07:59.839 }, 00:07:59.839 
"claimed": false, 00:07:59.839 "zoned": false, 00:07:59.839 "supported_io_types": { 00:07:59.839 "read": true, 00:07:59.839 "write": true, 00:07:59.839 "unmap": true, 00:07:59.839 "flush": false, 00:07:59.839 "reset": true, 00:07:59.839 "nvme_admin": false, 00:07:59.839 "nvme_io": false, 00:07:59.839 "nvme_io_md": false, 00:07:59.839 "write_zeroes": true, 00:07:59.839 "zcopy": false, 00:07:59.839 "get_zone_info": false, 00:07:59.839 "zone_management": false, 00:07:59.839 "zone_append": false, 00:07:59.839 "compare": false, 00:07:59.839 "compare_and_write": false, 00:07:59.839 "abort": false, 00:07:59.839 "seek_hole": true, 00:07:59.839 "seek_data": true, 00:07:59.839 "copy": false, 00:07:59.839 "nvme_iov_md": false 00:07:59.839 }, 00:07:59.839 "driver_specific": { 00:07:59.839 "lvol": { 00:07:59.839 "lvol_store_uuid": "32272334-42e1-4ea8-acd3-ebb876e266eb", 00:07:59.839 "base_bdev": "aio_bdev", 00:07:59.839 "thin_provision": false, 00:07:59.839 "num_allocated_clusters": 38, 00:07:59.839 "snapshot": false, 00:07:59.839 "clone": false, 00:07:59.839 "esnap_clone": false 00:07:59.839 } 00:07:59.839 } 00:07:59.839 } 00:07:59.839 ] 00:07:59.839 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:59.839 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:07:59.839 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:00.161 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:00.161 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:00.161 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:00.422 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:00.422 16:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:00.681 [2024-11-20 16:53:19.169796] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.939 16:53:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.939 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:01.197 request: 00:08:01.197 { 00:08:01.197 "uuid": "32272334-42e1-4ea8-acd3-ebb876e266eb", 00:08:01.197 "method": "bdev_lvol_get_lvstores", 00:08:01.197 "req_id": 1 00:08:01.197 } 00:08:01.197 Got JSON-RPC error response 00:08:01.197 response: 00:08:01.197 { 00:08:01.197 "code": -19, 00:08:01.197 "message": "No such device" 00:08:01.197 } 00:08:01.197 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:01.197 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.197 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.197 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.197 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.456 aio_bdev 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.456 16:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.715 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a32887a1-7643-4f79-b1dd-88ba62df9c2b -t 2000 00:08:01.973 [ 00:08:01.973 { 
00:08:01.973 "name": "a32887a1-7643-4f79-b1dd-88ba62df9c2b", 00:08:01.973 "aliases": [ 00:08:01.973 "lvs/lvol" 00:08:01.973 ], 00:08:01.973 "product_name": "Logical Volume", 00:08:01.973 "block_size": 4096, 00:08:01.973 "num_blocks": 38912, 00:08:01.973 "uuid": "a32887a1-7643-4f79-b1dd-88ba62df9c2b", 00:08:01.973 "assigned_rate_limits": { 00:08:01.973 "rw_ios_per_sec": 0, 00:08:01.973 "rw_mbytes_per_sec": 0, 00:08:01.973 "r_mbytes_per_sec": 0, 00:08:01.973 "w_mbytes_per_sec": 0 00:08:01.973 }, 00:08:01.973 "claimed": false, 00:08:01.973 "zoned": false, 00:08:01.973 "supported_io_types": { 00:08:01.973 "read": true, 00:08:01.973 "write": true, 00:08:01.973 "unmap": true, 00:08:01.973 "flush": false, 00:08:01.973 "reset": true, 00:08:01.973 "nvme_admin": false, 00:08:01.973 "nvme_io": false, 00:08:01.973 "nvme_io_md": false, 00:08:01.973 "write_zeroes": true, 00:08:01.973 "zcopy": false, 00:08:01.973 "get_zone_info": false, 00:08:01.973 "zone_management": false, 00:08:01.973 "zone_append": false, 00:08:01.973 "compare": false, 00:08:01.973 "compare_and_write": false, 00:08:01.973 "abort": false, 00:08:01.973 "seek_hole": true, 00:08:01.973 "seek_data": true, 00:08:01.973 "copy": false, 00:08:01.973 "nvme_iov_md": false 00:08:01.973 }, 00:08:01.973 "driver_specific": { 00:08:01.973 "lvol": { 00:08:01.973 "lvol_store_uuid": "32272334-42e1-4ea8-acd3-ebb876e266eb", 00:08:01.973 "base_bdev": "aio_bdev", 00:08:01.973 "thin_provision": false, 00:08:01.973 "num_allocated_clusters": 38, 00:08:01.973 "snapshot": false, 00:08:01.973 "clone": false, 00:08:01.973 "esnap_clone": false 00:08:01.973 } 00:08:01.973 } 00:08:01.973 } 00:08:01.973 ] 00:08:01.973 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:01.973 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:01.973 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.232 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.232 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:02.232 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:02.491 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.491 16:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a32887a1-7643-4f79-b1dd-88ba62df9c2b 00:08:03.059 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32272334-42e1-4ea8-acd3-ebb876e266eb 00:08:03.317 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.575 16:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.177 00:08:04.177 real 0m22.007s 00:08:04.177 user 0m45.671s 00:08:04.177 sys 0m8.157s 00:08:04.177 ************************************ 00:08:04.177 END TEST lvs_grow_dirty 00:08:04.177 ************************************ 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:04.177 nvmf_trace.0 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.177 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.453 rmmod nvme_tcp 00:08:04.453 rmmod nvme_fabrics 00:08:04.453 rmmod nvme_keyring 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63755 ']' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63755 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63755 ']' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63755 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:04.453 16:53:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63755 00:08:04.453 killing process with pid 63755 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63755' 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63755 00:08:04.453 16:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63755 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:04.712 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.971 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:04.971 ************************************ 00:08:04.971 END TEST nvmf_lvs_grow 00:08:04.971 ************************************ 00:08:04.971 00:08:04.971 real 0m44.023s 00:08:04.971 user 1m11.443s 00:08:04.972 sys 0m11.690s 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.972 ************************************ 00:08:04.972 START TEST nvmf_bdev_io_wait 00:08:04.972 ************************************ 00:08:04.972 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:04.972 * Looking for test storage... 
00:08:05.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.232 --rc genhtml_branch_coverage=1 00:08:05.232 --rc genhtml_function_coverage=1 00:08:05.232 --rc genhtml_legend=1 00:08:05.232 --rc geninfo_all_blocks=1 00:08:05.232 --rc geninfo_unexecuted_blocks=1 00:08:05.232 00:08:05.232 ' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.232 --rc genhtml_branch_coverage=1 00:08:05.232 --rc genhtml_function_coverage=1 00:08:05.232 --rc genhtml_legend=1 00:08:05.232 --rc geninfo_all_blocks=1 00:08:05.232 --rc geninfo_unexecuted_blocks=1 00:08:05.232 00:08:05.232 ' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.232 --rc genhtml_branch_coverage=1 00:08:05.232 --rc genhtml_function_coverage=1 00:08:05.232 --rc genhtml_legend=1 00:08:05.232 --rc geninfo_all_blocks=1 00:08:05.232 --rc geninfo_unexecuted_blocks=1 00:08:05.232 00:08:05.232 ' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.232 --rc genhtml_branch_coverage=1 00:08:05.232 --rc genhtml_function_coverage=1 00:08:05.232 --rc genhtml_legend=1 00:08:05.232 --rc geninfo_all_blocks=1 00:08:05.232 --rc geninfo_unexecuted_blocks=1 00:08:05.232 00:08:05.232 ' 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.232 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.233 
16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:05.233 Cannot find device "nvmf_init_br" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:05.233 Cannot find device "nvmf_init_br2" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:05.233 Cannot find device "nvmf_tgt_br" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.233 Cannot find device "nvmf_tgt_br2" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:05.233 Cannot find device "nvmf_init_br" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:05.233 Cannot find device "nvmf_init_br2" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:05.233 Cannot find device "nvmf_tgt_br" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:05.233 Cannot find device "nvmf_tgt_br2" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:05.233 Cannot find device "nvmf_br" 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:05.233 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:05.493 Cannot find device "nvmf_init_if" 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:05.493 Cannot find device "nvmf_init_if2" 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:05.493 
16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.493 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.494 16:53:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:05.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:05.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:08:05.494 00:08:05.494 --- 10.0.0.3 ping statistics --- 00:08:05.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.494 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:05.494 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:05.753 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:05.753 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:08:05.753 00:08:05.753 --- 10.0.0.4 ping statistics --- 00:08:05.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.753 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:05.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:05.753 00:08:05.753 --- 10.0.0.1 ping statistics --- 00:08:05.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.753 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:05.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:05.753 00:08:05.753 --- 10.0.0.2 ping statistics --- 00:08:05.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.753 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64135 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64135 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64135 ']' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.753 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.753 [2024-11-20 16:53:24.136509] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:05.753 [2024-11-20 16:53:24.136930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.012 [2024-11-20 16:53:24.292218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.012 [2024-11-20 16:53:24.378962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.012 [2024-11-20 16:53:24.379378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.012 [2024-11-20 16:53:24.379646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.012 [2024-11-20 16:53:24.380016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.012 [2024-11-20 16:53:24.380271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.012 [2024-11-20 16:53:24.382281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.012 [2024-11-20 16:53:24.382432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.012 [2024-11-20 16:53:24.382514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.012 [2024-11-20 16:53:24.382515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.012 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.012 [2024-11-20 16:53:24.537248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 [2024-11-20 16:53:24.553814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 Malloc0 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.272 [2024-11-20 16:53:24.610665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64168 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64170 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.272 16:53:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.272 { 00:08:06.272 "params": { 00:08:06.272 "name": "Nvme$subsystem", 00:08:06.272 "trtype": "$TEST_TRANSPORT", 00:08:06.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.272 "adrfam": "ipv4", 00:08:06.272 "trsvcid": "$NVMF_PORT", 00:08:06.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.272 "hdgst": ${hdgst:-false}, 00:08:06.272 "ddgst": ${ddgst:-false} 00:08:06.272 }, 00:08:06.272 "method": "bdev_nvme_attach_controller" 00:08:06.272 } 00:08:06.272 EOF 00:08:06.272 )") 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64172 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.272 { 00:08:06.272 "params": { 00:08:06.272 "name": "Nvme$subsystem", 00:08:06.272 "trtype": "$TEST_TRANSPORT", 00:08:06.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.272 "adrfam": "ipv4", 00:08:06.272 "trsvcid": "$NVMF_PORT", 00:08:06.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.272 "hdgst": ${hdgst:-false}, 00:08:06.272 "ddgst": ${ddgst:-false} 00:08:06.272 }, 00:08:06.272 "method": "bdev_nvme_attach_controller" 00:08:06.272 } 00:08:06.272 EOF 00:08:06.272 )") 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.272 { 00:08:06.272 "params": { 00:08:06.272 "name": "Nvme$subsystem", 00:08:06.272 "trtype": "$TEST_TRANSPORT", 00:08:06.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.272 "adrfam": "ipv4", 00:08:06.272 "trsvcid": "$NVMF_PORT", 00:08:06.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.272 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.272 "hdgst": ${hdgst:-false}, 00:08:06.272 "ddgst": ${ddgst:-false} 00:08:06.272 }, 00:08:06.272 "method": "bdev_nvme_attach_controller" 00:08:06.272 } 00:08:06.272 EOF 00:08:06.272 )") 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64175 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:06.272 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.273 { 00:08:06.273 "params": { 00:08:06.273 "name": "Nvme$subsystem", 00:08:06.273 "trtype": "$TEST_TRANSPORT", 00:08:06.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.273 "adrfam": "ipv4", 00:08:06.273 "trsvcid": "$NVMF_PORT", 00:08:06.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.273 "hdgst": ${hdgst:-false}, 00:08:06.273 "ddgst": ${ddgst:-false} 00:08:06.273 }, 00:08:06.273 "method": "bdev_nvme_attach_controller" 00:08:06.273 } 00:08:06.273 EOF 00:08:06.273 )") 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.273 "params": { 00:08:06.273 "name": "Nvme1", 00:08:06.273 "trtype": "tcp", 00:08:06.273 "traddr": "10.0.0.3", 00:08:06.273 "adrfam": "ipv4", 00:08:06.273 "trsvcid": "4420", 00:08:06.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.273 "hdgst": false, 00:08:06.273 "ddgst": false 00:08:06.273 }, 00:08:06.273 "method": "bdev_nvme_attach_controller" 00:08:06.273 }' 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.273 "params": { 00:08:06.273 "name": "Nvme1", 00:08:06.273 "trtype": "tcp", 00:08:06.273 "traddr": "10.0.0.3", 00:08:06.273 "adrfam": "ipv4", 00:08:06.273 "trsvcid": "4420", 00:08:06.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.273 "hdgst": false, 00:08:06.273 "ddgst": false 00:08:06.273 }, 00:08:06.273 "method": "bdev_nvme_attach_controller" 00:08:06.273 }' 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.273 "params": { 00:08:06.273 "name": "Nvme1", 00:08:06.273 "trtype": "tcp", 00:08:06.273 "traddr": "10.0.0.3", 00:08:06.273 "adrfam": "ipv4", 00:08:06.273 "trsvcid": "4420", 00:08:06.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.273 "hdgst": false, 00:08:06.273 "ddgst": false 00:08:06.273 }, 00:08:06.273 "method": "bdev_nvme_attach_controller" 00:08:06.273 }' 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.273 "params": { 00:08:06.273 "name": "Nvme1", 00:08:06.273 "trtype": "tcp", 00:08:06.273 "traddr": "10.0.0.3", 00:08:06.273 "adrfam": "ipv4", 00:08:06.273 "trsvcid": "4420", 00:08:06.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:06.273 "hdgst": false, 00:08:06.273 "ddgst": false 00:08:06.273 }, 00:08:06.273 "method": "bdev_nvme_attach_controller" 00:08:06.273 }' 00:08:06.273 [2024-11-20 16:53:24.667735] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:06.273 [2024-11-20 16:53:24.668693] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:06.273 [2024-11-20 16:53:24.677375] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:06.273 [2024-11-20 16:53:24.677578] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:06.273 16:53:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64168 00:08:06.273 [2024-11-20 16:53:24.701322] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:06.273 [2024-11-20 16:53:24.702214] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:06.273 [2024-11-20 16:53:24.706131] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:06.273 [2024-11-20 16:53:24.706213] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:06.532 [2024-11-20 16:53:24.882396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.532 [2024-11-20 16:53:24.934715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:06.532 [2024-11-20 16:53:24.948093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.532 [2024-11-20 16:53:24.956281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.532 [2024-11-20 16:53:25.006984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.532 [2024-11-20 16:53:25.019931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.792 Running I/O for 1 seconds... 00:08:06.792 [2024-11-20 16:53:25.108927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.792 [2024-11-20 16:53:25.124048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.792 Running I/O for 1 seconds... 00:08:06.792 [2024-11-20 16:53:25.179214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:06.792 [2024-11-20 16:53:25.188280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:06.792 [2024-11-20 16:53:25.194330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.792 [2024-11-20 16:53:25.202350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.050 Running I/O for 1 seconds... 00:08:07.050 Running I/O for 1 seconds... 
00:08:07.619 10934.00 IOPS, 42.71 MiB/s 00:08:07.619 Latency(us) 00:08:07.619 [2024-11-20T16:53:26.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.619 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:07.619 Nvme1n1 : 1.01 10985.13 42.91 0.00 0.00 11606.03 7089.80 17992.61 00:08:07.619 [2024-11-20T16:53:26.155Z] =================================================================================================================== 00:08:07.619 [2024-11-20T16:53:26.155Z] Total : 10985.13 42.91 0.00 0.00 11606.03 7089.80 17992.61 00:08:07.928 7931.00 IOPS, 30.98 MiB/s 00:08:07.928 Latency(us) 00:08:07.928 [2024-11-20T16:53:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.928 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:07.928 Nvme1n1 : 1.01 7985.01 31.19 0.00 0.00 15941.81 6911.07 26214.40 00:08:07.928 [2024-11-20T16:53:26.464Z] =================================================================================================================== 00:08:07.928 [2024-11-20T16:53:26.464Z] Total : 7985.01 31.19 0.00 0.00 15941.81 6911.07 26214.40 00:08:07.928 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64170 00:08:07.928 7715.00 IOPS, 30.14 MiB/s 00:08:07.928 Latency(us) 00:08:07.928 [2024-11-20T16:53:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.928 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:07.928 Nvme1n1 : 1.01 7764.63 30.33 0.00 0.00 16394.76 8400.52 25976.09 00:08:07.928 [2024-11-20T16:53:26.464Z] =================================================================================================================== 00:08:07.928 [2024-11-20T16:53:26.464Z] Total : 7764.63 30.33 0.00 0.00 16394.76 8400.52 25976.09 00:08:07.928 162728.00 IOPS, 635.66 MiB/s 00:08:07.928 Latency(us) 00:08:07.928 [2024-11-20T16:53:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.928 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:07.928 Nvme1n1 : 1.00 162363.86 634.23 0.00 0.00 783.74 389.12 2189.50 00:08:07.928 [2024-11-20T16:53:26.464Z] =================================================================================================================== 00:08:07.928 [2024-11-20T16:53:26.464Z] Total : 162363.86 634.23 0.00 0.00 783.74 389.12 2189.50 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64172 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64175 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.187 rmmod nvme_tcp 00:08:08.187 rmmod nvme_fabrics 00:08:08.187 rmmod nvme_keyring 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64135 ']' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64135 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64135 ']' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64135 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64135 00:08:08.187 killing process with pid 64135 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64135' 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64135 00:08:08.187 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64135 00:08:08.446 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.446 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.446 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.446 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:08.446 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:08.447 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:08.705 16:53:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:08.705 00:08:08.705 real 0m3.658s 00:08:08.705 user 0m14.380s 00:08:08.705 sys 0m2.324s 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.705 ************************************ 00:08:08.705 END TEST nvmf_bdev_io_wait 00:08:08.705 ************************************ 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.705 ************************************ 00:08:08.705 START TEST nvmf_queue_depth 00:08:08.705 ************************************ 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:08.705 * Looking for test storage... 
00:08:08.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.705 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.706 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.965 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.965 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.965 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.965 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.965 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.966 --rc genhtml_branch_coverage=1 00:08:08.966 --rc genhtml_function_coverage=1 00:08:08.966 --rc genhtml_legend=1 00:08:08.966 --rc geninfo_all_blocks=1 00:08:08.966 --rc geninfo_unexecuted_blocks=1 00:08:08.966 00:08:08.966 ' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.966 --rc genhtml_branch_coverage=1 00:08:08.966 --rc genhtml_function_coverage=1 00:08:08.966 --rc genhtml_legend=1 00:08:08.966 --rc geninfo_all_blocks=1 00:08:08.966 --rc geninfo_unexecuted_blocks=1 00:08:08.966 00:08:08.966 ' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.966 --rc genhtml_branch_coverage=1 00:08:08.966 --rc genhtml_function_coverage=1 00:08:08.966 --rc genhtml_legend=1 00:08:08.966 --rc geninfo_all_blocks=1 00:08:08.966 --rc geninfo_unexecuted_blocks=1 00:08:08.966 00:08:08.966 ' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.966 --rc genhtml_branch_coverage=1 00:08:08.966 --rc genhtml_function_coverage=1 00:08:08.966 --rc genhtml_legend=1 00:08:08.966 --rc geninfo_all_blocks=1 00:08:08.966 --rc geninfo_unexecuted_blocks=1 00:08:08.966 00:08:08.966 ' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:08.966 
16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.966 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.967 16:53:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:08.967 Cannot find device "nvmf_init_br" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:08.967 Cannot find device "nvmf_init_br2" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:08.967 Cannot find device "nvmf_tgt_br" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.967 Cannot find device "nvmf_tgt_br2" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:08.967 Cannot find device "nvmf_init_br" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:08.967 Cannot find device "nvmf_init_br2" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:08.967 Cannot find device "nvmf_tgt_br" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:08.967 Cannot find device "nvmf_tgt_br2" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:08.967 Cannot find device "nvmf_br" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:08.967 Cannot find device "nvmf_init_if" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:08.967 Cannot find device "nvmf_init_if2" 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.967 16:53:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:08.967 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.226 
16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:09.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:09.226 00:08:09.226 --- 10.0.0.3 ping statistics --- 00:08:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.226 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:09.226 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:09.226 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:09.226 00:08:09.226 --- 10.0.0.4 ping statistics --- 00:08:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.226 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:09.226 00:08:09.226 --- 10.0.0.1 ping statistics --- 00:08:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.226 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:09.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:09.226 00:08:09.226 --- 10.0.0.2 ping statistics --- 00:08:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.226 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.226 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64434 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64434 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64434 ']' 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.485 16:53:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.485 [2024-11-20 16:53:27.839444] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:09.486 [2024-11-20 16:53:27.839539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.486 [2024-11-20 16:53:27.996625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.744 [2024-11-20 16:53:28.083298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.744 [2024-11-20 16:53:28.083665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.744 [2024-11-20 16:53:28.083715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.744 [2024-11-20 16:53:28.083735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.744 [2024-11-20 16:53:28.083753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.744 [2024-11-20 16:53:28.084368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.744 [2024-11-20 16:53:28.146005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.744 [2024-11-20 16:53:28.258353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.744 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.003 Malloc0 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.003 [2024-11-20 16:53:28.311052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:10.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64454 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64454 /var/tmp/bdevperf.sock 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64454 ']' 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.003 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.003 [2024-11-20 16:53:28.372570] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:10.003 [2024-11-20 16:53:28.373064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64454 ] 00:08:10.003 [2024-11-20 16:53:28.527611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.261 [2024-11-20 16:53:28.592839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.261 [2024-11-20 16:53:28.649783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.261 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.261 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:10.261 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:10.261 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.261 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.519 NVMe0n1 00:08:10.519 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.519 16:53:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.519 Running I/O for 10 seconds... 00:08:12.458 6552.00 IOPS, 25.59 MiB/s [2024-11-20T16:53:31.931Z] 7178.50 IOPS, 28.04 MiB/s [2024-11-20T16:53:33.308Z] 7532.00 IOPS, 29.42 MiB/s [2024-11-20T16:53:34.243Z] 7728.25 IOPS, 30.19 MiB/s [2024-11-20T16:53:35.178Z] 7799.00 IOPS, 30.46 MiB/s [2024-11-20T16:53:36.115Z] 7815.83 IOPS, 30.53 MiB/s [2024-11-20T16:53:37.151Z] 7901.00 IOPS, 30.86 MiB/s [2024-11-20T16:53:38.087Z] 8025.25 IOPS, 31.35 MiB/s [2024-11-20T16:53:39.022Z] 8066.33 IOPS, 31.51 MiB/s [2024-11-20T16:53:39.022Z] 8112.10 IOPS, 31.69 MiB/s 00:08:20.486 Latency(us) 00:08:20.486 [2024-11-20T16:53:39.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:20.486 Verification LBA range: start 0x0 length 0x4000 00:08:20.486 NVMe0n1 : 10.09 8142.57 31.81 0.00 0.00 125148.03 27882.59 94848.47 00:08:20.486 [2024-11-20T16:53:39.022Z] =================================================================================================================== 00:08:20.486 [2024-11-20T16:53:39.022Z] Total : 8142.57 31.81 0.00 0.00 125148.03 27882.59 94848.47 00:08:20.486 { 00:08:20.486 "results": [ 00:08:20.486 { 00:08:20.486 "job": "NVMe0n1", 00:08:20.486 "core_mask": "0x1", 00:08:20.486 "workload": "verify", 00:08:20.486 "status": "finished", 00:08:20.486 "verify_range": { 00:08:20.486 "start": 0, 00:08:20.486 "length": 16384 00:08:20.486 }, 00:08:20.486 "queue_depth": 1024, 00:08:20.486 "io_size": 4096, 00:08:20.486 "runtime": 10.086254, 00:08:20.486 "iops": 8142.567101720817, 00:08:20.486 "mibps": 31.80690274109694, 00:08:20.486 "io_failed": 0, 00:08:20.486 "io_timeout": 0, 00:08:20.486 "avg_latency_us": 125148.0347725059, 00:08:20.486 "min_latency_us": 27882.589090909092, 00:08:20.486 "max_latency_us": 94848.46545454545 00:08:20.486 
} 00:08:20.486 ], 00:08:20.486 "core_count": 1 00:08:20.486 } 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64454 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64454 ']' 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64454 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64454 00:08:20.745 killing process with pid 64454 00:08:20.745 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.745 00:08:20.745 Latency(us) 00:08:20.745 [2024-11-20T16:53:39.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.745 [2024-11-20T16:53:39.281Z] =================================================================================================================== 00:08:20.745 [2024-11-20T16:53:39.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64454' 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64454 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64454 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.745 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.001 rmmod nvme_tcp 00:08:21.001 rmmod nvme_fabrics 00:08:21.001 rmmod nvme_keyring 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64434 ']' 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64434 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64434 ']' 00:08:21.001 
16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64434 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64434 00:08:21.001 killing process with pid 64434 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64434' 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64434 00:08:21.001 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64434 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:21.259 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:21.518 16:53:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:21.518 00:08:21.518 real 0m12.742s 00:08:21.518 user 0m21.632s 00:08:21.518 sys 0m2.153s 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.518 ************************************ 00:08:21.518 END TEST nvmf_queue_depth 00:08:21.518 ************************************ 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.518 ************************************ 00:08:21.518 START TEST nvmf_target_multipath 00:08:21.518 ************************************ 00:08:21.518 16:53:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:21.518 * Looking for test storage... 
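For orientation, the nvmf_queue_depth test that just ended drives I/O entirely over bdevperf's own RPC socket (/var/tmp/bdevperf.sock): the NVMe-oF/TCP controller is attached with bdev_nvme_attach_controller and the workload is started with bdevperf.py perform_tests. A minimal sketch of that sequence follows; the two RPC calls are taken verbatim from the trace, while the bdevperf start-up line and its flags are an assumption (the log only shows the resulting DPDK/EAL banner, and -q/-o/-w/-t here simply mirror the JSON summary above):

  # start bdevperf idle on a private RPC socket (command line assumed, not shown in this log)
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the target's namespace as bdev NVMe0n1 over TCP (verbatim from the trace)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the configured workload and wait for the JSON result (verbatim from the trace)
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests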
00:08:21.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.518 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.518 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.518 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:21.778 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.779 --rc genhtml_branch_coverage=1 00:08:21.779 --rc genhtml_function_coverage=1 00:08:21.779 --rc genhtml_legend=1 00:08:21.779 --rc geninfo_all_blocks=1 00:08:21.779 --rc geninfo_unexecuted_blocks=1 00:08:21.779 00:08:21.779 ' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.779 --rc genhtml_branch_coverage=1 00:08:21.779 --rc genhtml_function_coverage=1 00:08:21.779 --rc genhtml_legend=1 00:08:21.779 --rc geninfo_all_blocks=1 00:08:21.779 --rc geninfo_unexecuted_blocks=1 00:08:21.779 00:08:21.779 ' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.779 --rc genhtml_branch_coverage=1 00:08:21.779 --rc genhtml_function_coverage=1 00:08:21.779 --rc genhtml_legend=1 00:08:21.779 --rc geninfo_all_blocks=1 00:08:21.779 --rc geninfo_unexecuted_blocks=1 00:08:21.779 00:08:21.779 ' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.779 --rc genhtml_branch_coverage=1 00:08:21.779 --rc genhtml_function_coverage=1 00:08:21.779 --rc genhtml_legend=1 00:08:21.779 --rc geninfo_all_blocks=1 00:08:21.779 --rc geninfo_unexecuted_blocks=1 00:08:21.779 00:08:21.779 ' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.779 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.779 
16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.792 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:21.792 16:53:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.792 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:21.792 Cannot find device "nvmf_init_br" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:21.793 Cannot find device "nvmf_init_br2" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:21.793 Cannot find device "nvmf_tgt_br" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.793 Cannot find device "nvmf_tgt_br2" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:21.793 Cannot find device "nvmf_init_br" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:21.793 Cannot find device "nvmf_init_br2" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:21.793 Cannot find device "nvmf_tgt_br" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:21.793 Cannot find device "nvmf_tgt_br2" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:21.793 Cannot find device "nvmf_br" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:21.793 Cannot find device "nvmf_init_if" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:21.793 Cannot find device "nvmf_init_if2" 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:21.793 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.056 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:22.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:22.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:22.057 00:08:22.057 --- 10.0.0.3 ping statistics --- 00:08:22.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.057 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:22.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:22.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:08:22.057 00:08:22.057 --- 10.0.0.4 ping statistics --- 00:08:22.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.057 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:22.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:22.057 00:08:22.057 --- 10.0.0.1 ping statistics --- 00:08:22.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.057 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:22.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:22.057 00:08:22.057 --- 10.0.0.2 ping statistics --- 00:08:22.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.057 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64833 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64833 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64833 ']' 00:08:22.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
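At this point nvmf_veth_init has finished building the virtual test network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and listens on 10.0.0.3/10.0.0.4, the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, the host-side veth ends are tied together by the nvmf_br bridge, and TCP/4420 is opened in iptables. Condensed from the commands in the trace (only one veth pair per side is written out here; the run creates a second pair each for 10.0.0.2 and 10.0.0.4), the topology boils down to:

  ip netns add nvmf_tgt_ns_spdk
  # one initiator-side and one target-side veth pair
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side veth ends so the two namespaces can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic in and sanity-check reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1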
00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.057 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.057 [2024-11-20 16:53:40.582553] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:22.057 [2024-11-20 16:53:40.582641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.323 [2024-11-20 16:53:40.736988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.323 [2024-11-20 16:53:40.797957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.323 [2024-11-20 16:53:40.798030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.323 [2024-11-20 16:53:40.798044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.323 [2024-11-20 16:53:40.798054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.323 [2024-11-20 16:53:40.798064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.323 [2024-11-20 16:53:40.799339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.323 [2024-11-20 16:53:40.799477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.323 [2024-11-20 16:53:40.799590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.323 [2024-11-20 16:53:40.799594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.607 [2024-11-20 16:53:40.858585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.607 16:53:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.880 [2024-11-20 16:53:41.218297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.880 16:53:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:23.143 Malloc0 00:08:23.143 16:53:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:23.408 16:53:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.672 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:23.936 [2024-11-20 16:53:42.412491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:23.936 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:24.199 [2024-11-20 16:53:42.664733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:24.200 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:24.459 16:53:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64922 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:26.991 16:53:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:26.991 [global] 00:08:26.991 thread=1 00:08:26.991 invalidate=1 00:08:26.991 rw=randrw 00:08:26.991 time_based=1 00:08:26.991 runtime=6 00:08:26.991 ioengine=libaio 00:08:26.991 direct=1 00:08:26.991 bs=4096 00:08:26.991 iodepth=128 00:08:26.991 norandommap=0 00:08:26.991 numjobs=1 00:08:26.991 00:08:26.991 verify_dump=1 00:08:26.991 verify_backlog=512 00:08:26.991 verify_state_save=0 00:08:26.991 do_verify=1 00:08:26.991 verify=crc32c-intel 00:08:26.991 [job0] 00:08:26.991 filename=/dev/nvme0n1 00:08:26.991 Could not set queue depth (nvme0n1) 00:08:26.991 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.991 fio-3.35 00:08:26.991 Starting 1 thread 00:08:27.556 16:53:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:27.814 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.073 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:28.639 16:53:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.896 16:53:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64922 00:08:33.086 00:08:33.086 job0: (groupid=0, jobs=1): err= 0: pid=64943: Wed Nov 20 16:53:51 2024 00:08:33.086 read: IOPS=10.3k, BW=40.2MiB/s (42.1MB/s)(241MiB/6007msec) 00:08:33.086 slat (usec): min=4, max=5904, avg=57.12, stdev=221.75 00:08:33.086 clat (usec): min=1681, max=17229, avg=8423.92, stdev=1425.11 00:08:33.086 lat (usec): min=1691, max=17239, avg=8481.04, stdev=1427.96 00:08:33.086 clat percentiles (usec): 00:08:33.086 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7767], 00:08:33.086 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:08:33.086 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11863], 00:08:33.086 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[14222], 00:08:33.086 | 99.99th=[16712] 00:08:33.086 bw ( KiB/s): min= 5200, max=28296, per=51.91%, avg=21364.45, stdev=8130.81, samples=11 00:08:33.086 iops : min= 1300, max= 7074, avg=5341.09, stdev=2032.70, samples=11 00:08:33.086 write: IOPS=6191, BW=24.2MiB/s (25.4MB/s)(128MiB/5274msec); 0 zone resets 00:08:33.086 slat (usec): min=15, max=2664, avg=65.72, stdev=162.04 00:08:33.086 clat (usec): min=1351, max=17368, avg=7390.14, stdev=1237.92 00:08:33.086 lat (usec): min=1375, max=18199, avg=7455.86, stdev=1243.30 00:08:33.086 clat percentiles (usec): 00:08:33.086 | 1.00th=[ 3326], 5.00th=[ 4490], 10.00th=[ 6259], 20.00th=[ 6915], 00:08:33.086 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:08:33.086 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:08:33.086 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13042], 99.95th=[13829], 00:08:33.086 | 99.99th=[16909] 00:08:33.086 bw ( KiB/s): min= 5512, max=27464, per=86.62%, avg=21452.91, stdev=7917.15, samples=11 00:08:33.086 iops : min= 1378, max= 6866, avg=5363.18, stdev=1979.27, samples=11 00:08:33.087 lat (msec) : 2=0.02%, 4=1.52%, 10=93.30%, 20=5.16% 00:08:33.087 cpu : usr=5.46%, sys=21.41%, ctx=5535, majf=0, minf=102 00:08:33.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:33.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.087 issued rwts: total=61803,32652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.087 00:08:33.087 Run status group 0 (all jobs): 00:08:33.087 READ: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=241MiB (253MB), run=6007-6007msec 00:08:33.087 WRITE: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=128MiB (134MB), run=5274-5274msec 00:08:33.087 00:08:33.087 Disk stats (read/write): 00:08:33.087 nvme0n1: ios=60928/32047, merge=0/0, ticks=492759/222760, in_queue=715519, util=98.62% 00:08:33.087 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:33.345 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65024 00:08:33.605 16:53:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:33.605 [global] 00:08:33.605 thread=1 00:08:33.605 invalidate=1 00:08:33.605 rw=randrw 00:08:33.605 time_based=1 00:08:33.605 runtime=6 00:08:33.605 ioengine=libaio 00:08:33.605 direct=1 00:08:33.605 bs=4096 00:08:33.605 iodepth=128 00:08:33.605 norandommap=0 00:08:33.605 numjobs=1 00:08:33.605 00:08:33.605 verify_dump=1 00:08:33.605 verify_backlog=512 00:08:33.605 verify_state_save=0 00:08:33.605 do_verify=1 00:08:33.605 verify=crc32c-intel 00:08:33.605 [job0] 00:08:33.605 filename=/dev/nvme0n1 00:08:33.605 Could not set queue depth (nvme0n1) 00:08:33.605 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:33.605 fio-3.35 00:08:33.605 Starting 1 thread 00:08:34.542 16:53:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:34.801 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:35.060 
16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.060 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:35.319 16:53:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.576 16:53:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65024 00:08:39.765 00:08:39.765 job0: (groupid=0, jobs=1): err= 0: pid=65045: Wed Nov 20 16:53:58 2024 00:08:39.765 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6006msec) 00:08:39.765 slat (usec): min=2, max=6909, avg=42.31, stdev=190.42 00:08:39.765 clat (usec): min=1117, max=14925, avg=7491.03, stdev=1907.03 00:08:39.765 lat (usec): min=1130, max=14958, avg=7533.34, stdev=1921.76 00:08:39.765 clat percentiles (usec): 00:08:39.765 | 1.00th=[ 2933], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 5932], 00:08:39.765 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094], 00:08:39.765 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[10421], 00:08:39.765 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[13566], 00:08:39.765 | 99.99th=[13960] 00:08:39.765 bw ( KiB/s): min= 5736, max=37864, per=53.65%, avg=24934.55, stdev=8589.08, samples=11 00:08:39.765 iops : min= 1434, max= 9466, avg=6233.64, stdev=2147.27, samples=11 00:08:39.765 write: IOPS=6985, BW=27.3MiB/s (28.6MB/s)(148MiB/5422msec); 0 zone resets 00:08:39.765 slat (usec): min=4, max=5379, avg=52.30, stdev=132.32 00:08:39.765 clat (usec): min=637, max=13787, avg=6334.79, stdev=1742.46 00:08:39.765 lat (usec): min=689, max=13845, avg=6387.10, stdev=1756.85 00:08:39.765 clat percentiles (usec): 00:08:39.765 | 1.00th=[ 2540], 5.00th=[ 3392], 10.00th=[ 3818], 20.00th=[ 4490], 00:08:39.765 | 30.00th=[ 5211], 40.00th=[ 6325], 50.00th=[ 6915], 60.00th=[ 7242], 00:08:39.765 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 00:08:39.765 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12387], 99.95th=[12780], 00:08:39.765 | 99.99th=[13698] 00:08:39.765 bw ( KiB/s): min= 6000, max=37208, per=89.18%, avg=24921.45, stdev=8435.08, samples=11 00:08:39.765 iops : min= 1500, max= 9302, avg=6230.36, stdev=2108.77, samples=11 00:08:39.765 lat (usec) : 750=0.01%, 1000=0.01% 00:08:39.765 lat (msec) : 2=0.15%, 4=7.57%, 10=88.36%, 20=3.92% 00:08:39.765 cpu : usr=5.89%, sys=23.84%, ctx=6673, majf=0, minf=102 00:08:39.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:08:39.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.765 issued rwts: total=69782,37878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.765 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:08:39.765 00:08:39.765 Run status group 0 (all jobs): 00:08:39.765 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6006-6006msec 00:08:39.765 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=148MiB (155MB), run=5422-5422msec 00:08:39.765 00:08:39.765 Disk stats (read/write): 00:08:39.765 nvme0n1: ios=69101/36966, merge=0/0, ticks=489288/213637, in_queue=702925, util=98.58% 00:08:39.765 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:40.024 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.283 rmmod nvme_tcp 00:08:40.283 rmmod nvme_fabrics 00:08:40.283 rmmod nvme_keyring 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 64833 ']' 
00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64833 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64833 ']' 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64833 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64833 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.283 killing process with pid 64833 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64833' 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64833 00:08:40.283 16:53:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64833 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:40.542 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:40.800 16:53:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.800 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.801 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:40.801 00:08:40.801 real 0m19.363s 00:08:40.801 user 1m12.060s 00:08:40.801 sys 0m9.586s 00:08:40.801 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.801 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:40.801 ************************************ 00:08:40.801 END TEST nvmf_target_multipath 00:08:40.801 ************************************ 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.060 ************************************ 00:08:41.060 START TEST nvmf_zcopy 00:08:41.060 ************************************ 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.060 * Looking for test storage... 
00:08:41.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.060 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.061 --rc genhtml_branch_coverage=1 00:08:41.061 --rc genhtml_function_coverage=1 00:08:41.061 --rc genhtml_legend=1 00:08:41.061 --rc geninfo_all_blocks=1 00:08:41.061 --rc geninfo_unexecuted_blocks=1 00:08:41.061 00:08:41.061 ' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.061 --rc genhtml_branch_coverage=1 00:08:41.061 --rc genhtml_function_coverage=1 00:08:41.061 --rc genhtml_legend=1 00:08:41.061 --rc geninfo_all_blocks=1 00:08:41.061 --rc geninfo_unexecuted_blocks=1 00:08:41.061 00:08:41.061 ' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.061 --rc genhtml_branch_coverage=1 00:08:41.061 --rc genhtml_function_coverage=1 00:08:41.061 --rc genhtml_legend=1 00:08:41.061 --rc geninfo_all_blocks=1 00:08:41.061 --rc geninfo_unexecuted_blocks=1 00:08:41.061 00:08:41.061 ' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.061 --rc genhtml_branch_coverage=1 00:08:41.061 --rc genhtml_function_coverage=1 00:08:41.061 --rc genhtml_legend=1 00:08:41.061 --rc geninfo_all_blocks=1 00:08:41.061 --rc geninfo_unexecuted_blocks=1 00:08:41.061 00:08:41.061 ' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.061 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.062 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:41.321 Cannot find device "nvmf_init_br" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:41.321 16:53:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:41.321 Cannot find device "nvmf_init_br2" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:41.321 Cannot find device "nvmf_tgt_br" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.321 Cannot find device "nvmf_tgt_br2" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:41.321 Cannot find device "nvmf_init_br" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:41.321 Cannot find device "nvmf_init_br2" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:41.321 Cannot find device "nvmf_tgt_br" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:41.321 Cannot find device "nvmf_tgt_br2" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:41.321 Cannot find device "nvmf_br" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:41.321 Cannot find device "nvmf_init_if" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:41.321 Cannot find device "nvmf_init_if2" 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:41.321 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.579 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:41.580 16:53:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:41.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:41.580 00:08:41.580 --- 10.0.0.3 ping statistics --- 00:08:41.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.580 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:41.580 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:41.580 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:08:41.580 00:08:41.580 --- 10.0.0.4 ping statistics --- 00:08:41.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.580 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:41.580 00:08:41.580 --- 10.0.0.1 ping statistics --- 00:08:41.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.580 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:41.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:41.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:41.580 00:08:41.580 --- 10.0.0.2 ping statistics --- 00:08:41.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.580 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65348 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65348 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65348 ']' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.580 16:53:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.580 [2024-11-20 16:54:00.052473] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:41.580 [2024-11-20 16:54:00.052578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.838 [2024-11-20 16:54:00.204270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.838 [2024-11-20 16:54:00.260360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.838 [2024-11-20 16:54:00.260460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.838 [2024-11-20 16:54:00.260472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.838 [2024-11-20 16:54:00.260481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.838 [2024-11-20 16:54:00.260488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.838 [2024-11-20 16:54:00.260878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.838 [2024-11-20 16:54:00.315828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 [2024-11-20 16:54:01.111376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.774 [2024-11-20 16:54:01.127470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.775 malloc0 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.775 { 00:08:42.775 "params": { 00:08:42.775 "name": "Nvme$subsystem", 00:08:42.775 "trtype": "$TEST_TRANSPORT", 00:08:42.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.775 "adrfam": "ipv4", 00:08:42.775 "trsvcid": "$NVMF_PORT", 00:08:42.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.775 "hdgst": ${hdgst:-false}, 00:08:42.775 "ddgst": ${ddgst:-false} 00:08:42.775 }, 00:08:42.775 "method": "bdev_nvme_attach_controller" 00:08:42.775 } 00:08:42.775 EOF 00:08:42.775 )") 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:42.775 16:54:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.775 "params": { 00:08:42.775 "name": "Nvme1", 00:08:42.775 "trtype": "tcp", 00:08:42.775 "traddr": "10.0.0.3", 00:08:42.775 "adrfam": "ipv4", 00:08:42.775 "trsvcid": "4420", 00:08:42.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.775 "hdgst": false, 00:08:42.775 "ddgst": false 00:08:42.775 }, 00:08:42.775 "method": "bdev_nvme_attach_controller" 00:08:42.775 }' 00:08:42.775 [2024-11-20 16:54:01.227171] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:08:42.775 [2024-11-20 16:54:01.227280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65387 ] 00:08:43.034 [2024-11-20 16:54:01.384290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.034 [2024-11-20 16:54:01.443242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.034 [2024-11-20 16:54:01.508744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.292 Running I/O for 10 seconds... 00:08:45.164 5938.00 IOPS, 46.39 MiB/s [2024-11-20T16:54:04.711Z] 5979.50 IOPS, 46.71 MiB/s [2024-11-20T16:54:05.647Z] 5985.00 IOPS, 46.76 MiB/s [2024-11-20T16:54:07.020Z] 5902.50 IOPS, 46.11 MiB/s [2024-11-20T16:54:07.953Z] 5927.40 IOPS, 46.31 MiB/s [2024-11-20T16:54:08.886Z] 5944.83 IOPS, 46.44 MiB/s [2024-11-20T16:54:09.819Z] 5959.14 IOPS, 46.56 MiB/s [2024-11-20T16:54:10.755Z] 5964.12 IOPS, 46.59 MiB/s [2024-11-20T16:54:11.689Z] 5968.33 IOPS, 46.63 MiB/s [2024-11-20T16:54:11.689Z] 5976.10 IOPS, 46.69 MiB/s 00:08:53.153 Latency(us) 00:08:53.153 [2024-11-20T16:54:11.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.153 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:53.153 Verification LBA range: start 0x0 length 0x1000 00:08:53.153 Nvme1n1 : 10.01 5977.72 46.70 0.00 0.00 21342.35 1318.17 31933.91 00:08:53.153 [2024-11-20T16:54:11.689Z] =================================================================================================================== 00:08:53.153 [2024-11-20T16:54:11.689Z] Total : 5977.72 46.70 0.00 0.00 21342.35 1318.17 31933.91 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65504 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.411 { 00:08:53.411 "params": { 00:08:53.411 "name": "Nvme$subsystem", 00:08:53.411 "trtype": "$TEST_TRANSPORT", 00:08:53.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.411 "adrfam": "ipv4", 00:08:53.411 "trsvcid": "$NVMF_PORT", 00:08:53.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.411 "hdgst": ${hdgst:-false}, 00:08:53.411 "ddgst": ${ddgst:-false} 00:08:53.411 }, 00:08:53.411 "method": "bdev_nvme_attach_controller" 00:08:53.411 } 00:08:53.411 EOF 00:08:53.411 )") 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:53.411 [2024-11-20 16:54:11.852599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.852643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:53.411 16:54:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.411 "params": { 00:08:53.411 "name": "Nvme1", 00:08:53.411 "trtype": "tcp", 00:08:53.411 "traddr": "10.0.0.3", 00:08:53.411 "adrfam": "ipv4", 00:08:53.411 "trsvcid": "4420", 00:08:53.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.411 "hdgst": false, 00:08:53.411 "ddgst": false 00:08:53.411 }, 00:08:53.411 "method": "bdev_nvme_attach_controller" 00:08:53.411 }' 00:08:53.411 [2024-11-20 16:54:11.860556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.860586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.872572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.872606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.884572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.884609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.896573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.896608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.901145] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:08:53.411 [2024-11-20 16:54:11.901258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65504 ] 00:08:53.411 [2024-11-20 16:54:11.908578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.908615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.920583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.920620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.932584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.932619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.411 [2024-11-20 16:54:11.940579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.411 [2024-11-20 16:54:11.940614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.669 [2024-11-20 16:54:11.948583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.669 [2024-11-20 16:54:11.948625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.669 [2024-11-20 16:54:11.956595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.669 [2024-11-20 16:54:11.956629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.669 [2024-11-20 16:54:11.964587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.669 [2024-11-20 16:54:11.964627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.669 [2024-11-20 16:54:11.972587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:11.972621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:11.980590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:11.980623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:11.988589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:11.988620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:11.996594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:11.996629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.004601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.004636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.012601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.012633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.020621] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.020670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.028675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.028723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.036677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.036720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.044685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.044726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.052683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.052721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.060689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.060735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.063482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.670 [2024-11-20 16:54:12.072696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.072770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.084669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.084723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.096651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.096694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.108660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.108700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.120682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.120724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.128666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.128709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.136597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.670 [2024-11-20 16:54:12.140658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.140697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.148651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.148684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:53.670 [2024-11-20 16:54:12.160675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.160713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.168666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.168705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.176673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.176711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.188683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.188727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.670 [2024-11-20 16:54:12.198763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.670 [2024-11-20 16:54:12.200681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.670 [2024-11-20 16:54:12.200718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.212700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.212743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.224692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.224733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.232696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.232739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.240690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.240734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.248705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.248747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.928 [2024-11-20 16:54:12.260720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.928 [2024-11-20 16:54:12.260762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.268731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.268773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.276738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.276779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.284749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.284790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
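The repeated ERROR pairs that follow (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" then nvmf_rpc_ns_paused: "Unable to add namespace") are expected output: while this second bdevperf run (perfpid 65504) keeps queue-depth-128 I/O in flight, the target is repeatedly asked to attach a namespace under an NSID that is already taken, so each attempt briefly pauses the subsystem, fails the add, and resumes, stressing pause/resume while zero-copy requests are outstanding. A hedged sketch of such a loop is shown below; the rpc.py path, the bdev name (Malloc0), and the explicit NSID flag are illustrative assumptions, not copied from zcopy.sh.

```bash
# Hedged sketch of the loop behind the repeated "NSID 1 already in use" /
# "Unable to add namespace" pairs below; names and paths are illustrative.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perfpid=65504   # PID of the bdevperf run started at target/zcopy.sh@39 above

while kill -0 "$perfpid" 2> /dev/null; do
    # NSID 1 is already attached to cnode1, so every attempt is expected to
    # fail; the point is to drive the pause -> add_ns -> resume path while
    # zero-copy I/O is outstanding.
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done
```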
00:08:53.929 [2024-11-20 16:54:12.292745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.292787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.300748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.300788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.308758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.308800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.316766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.316807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 Running I/O for 5 seconds... 00:08:53.929 [2024-11-20 16:54:12.328614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.328657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.338381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.338422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.350396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.350438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.361606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.361646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.372781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.372839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.388172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.388215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.398444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.398483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.413913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.413950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.431776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.431817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.446598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.446639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.929 [2024-11-20 16:54:12.462703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:08:53.929 [2024-11-20 16:54:12.462744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.479261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.479299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.496594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.496634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.506171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.506212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.517785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.517834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.528568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.528607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.541602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.541643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.551528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.551565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.563118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.563156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.575466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.575505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.585325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.585365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.598540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.598584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.609067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.609104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.620282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.620324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.631105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.631144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 
[2024-11-20 16:54:12.645946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.645984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.655959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.656001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.672708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.672749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.689899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.689938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.187 [2024-11-20 16:54:12.706191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.187 [2024-11-20 16:54:12.706230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.722590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.722628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.740226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.740268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.756196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.756235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.765394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.765436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.781861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.781899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.798516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.798555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.807962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.807997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.823926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.823966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.840943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.840979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.858147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 
16:54:12.858184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.874429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.874474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.893303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.893346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.908237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.908277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.918251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.918289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.932790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.932839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.949537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.949577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.964797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.964847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.447 [2024-11-20 16:54:12.974682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.447 [2024-11-20 16:54:12.974721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:12.990165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:12.990203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.007846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.007893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.022972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.023010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.038716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.038754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.056184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.056227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.072020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.072060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.089478] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.089518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.106375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.106415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.122197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.122236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.139838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.139881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.156369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.156410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.731 [2024-11-20 16:54:13.166071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.731 [2024-11-20 16:54:13.166112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.180809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.180861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.190450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.190489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.203493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.203538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.213292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.213335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.224980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.225019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.239795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.239853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.732 [2024-11-20 16:54:13.255775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.732 [2024-11-20 16:54:13.255812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.274241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.274278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.288875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.288910] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.305041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.305080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 11449.00 IOPS, 89.45 MiB/s [2024-11-20T16:54:13.526Z] [2024-11-20 16:54:13.323414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.323452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.338211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.338250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.353721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.353758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.363002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.363045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.379137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.379174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.396243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.396283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.412270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.412309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.430425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.430466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.446167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.446205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.455791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.455843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.471817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.471879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.487323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.487369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 16:54:13.496809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.496862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.990 [2024-11-20 
16:54:13.512619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.990 [2024-11-20 16:54:13.512660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.530666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.530707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.545348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.545394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.560818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.560871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.578664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.578707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.593725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.593773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.611164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.611202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.628290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.628329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.645835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.645871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.660454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.660499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.676466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.676507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.692336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.692378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.710114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.710152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.725626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.725669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.742973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.743012] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.757325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.757362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.772429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.772469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.249 [2024-11-20 16:54:13.781484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.249 [2024-11-20 16:54:13.781523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.795101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.795140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.810001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.810038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.819838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.819892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.836901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.836942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.853997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.854035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.870299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.870335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.886895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.886933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.904558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.904598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.919448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.919488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.935494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.935533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.951757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.951795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.969852] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.969892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.985088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.985128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:13.995068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:13.995107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:14.011297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:14.011337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.507 [2024-11-20 16:54:14.028800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.507 [2024-11-20 16:54:14.028852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.045987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.046026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.063494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.063533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.078121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.078163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.094985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.095024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.109907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.109946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.119070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.119108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.135603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.135641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.152895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.152932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.168693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.168731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.765 [2024-11-20 16:54:14.185540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.765 [2024-11-20 16:54:14.185579] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.202346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.202387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.218800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.218851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.235185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.235224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.251525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.251571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.260987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.261026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.276646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.276686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.766 [2024-11-20 16:54:14.294529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.766 [2024-11-20 16:54:14.294566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.310901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.310936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 11537.50 IOPS, 90.14 MiB/s [2024-11-20T16:54:14.559Z] [2024-11-20 16:54:14.327670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.327708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.344583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.344622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.360099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.360140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.375672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.375711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.395292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.395330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.410213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.410252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 
16:54:14.427578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.427615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.443475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.443515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.453487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.453528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.468565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.468606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.486247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.486285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.501956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.501994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.512002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.512042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.527004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.527044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.023 [2024-11-20 16:54:14.545194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.023 [2024-11-20 16:54:14.545237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.560135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.560174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.569853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.569891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.586676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.586715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.602848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.602880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.620955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.620995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.636066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.636106] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.645725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.645762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.662404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.662442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.678516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.678555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.696444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.696484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.711140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.711179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.720304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.720352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.735787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.735837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.751452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.751491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.768992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.769030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.785408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.785461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.282 [2024-11-20 16:54:14.804375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.282 [2024-11-20 16:54:14.804417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.819516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.819554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.836181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.836220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.854707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.854748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.869753] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.869795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.886143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.886183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.903948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.904011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.919102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.919145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.935410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.935448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.951871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.951919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.969927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.969966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.984952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.984992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:14.995327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:14.995365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.540 [2024-11-20 16:54:15.011006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.540 [2024-11-20 16:54:15.011045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.541 [2024-11-20 16:54:15.027973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.541 [2024-11-20 16:54:15.028031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.541 [2024-11-20 16:54:15.044758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.541 [2024-11-20 16:54:15.044798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.541 [2024-11-20 16:54:15.061283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.541 [2024-11-20 16:54:15.061321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.541 [2024-11-20 16:54:15.070749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.541 [2024-11-20 16:54:15.070800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.086965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.087004] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.103146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.103185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.112661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.112700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.128733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.128773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.138661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.138700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.150211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.150255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.161238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.161278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.175706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.175745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.185194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.185231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.201090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.201129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.210707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.210744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.222570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.222612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.237701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.237741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.253637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.253678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.271696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.271743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.287772] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.287820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.296998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.297036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 [2024-11-20 16:54:15.313441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.313479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.800 11543.00 IOPS, 90.18 MiB/s [2024-11-20T16:54:15.336Z] [2024-11-20 16:54:15.323594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.800 [2024-11-20 16:54:15.323631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.335126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.335161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.346022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.346059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.360938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.360977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.378327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.378381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.393101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.393141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.408664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.408702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.427287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.427328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.441976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.442014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.457755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.457795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.476608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.476650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.491531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
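Note on the repeated error pairs above and below: they are expected output from this test. While a background I/O job runs, test/nvmf/target/zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1, and since that NSID is already claimed, every attempt logs "Requested NSID 1 already in use" followed by "Unable to add namespace". A minimal sketch of such a loop is below; the bdev name, the loop condition and the use of the rpc_cmd helper (a thin wrapper around scripts/rpc.py) are assumptions for illustration, the exact loop in zcopy.sh may differ:

    # Hypothetical sketch: keep trying to add an NSID that is already in use
    # while the background I/O process ($io_pid) is still alive; each failed
    # attempt produces the pair of errors seen in this log.
    while kill -0 "$io_pid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done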
00:08:57.058 [2024-11-20 16:54:15.491570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.507284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.507327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.523523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.523563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.533536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.533575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.545160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.545198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.556149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.556189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.570774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.570814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.058 [2024-11-20 16:54:15.588152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.058 [2024-11-20 16:54:15.588191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.604206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.604244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.613821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.613873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.627042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.627081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.642748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.642798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.659332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.659371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.677773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.677814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.693118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.693161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.711163] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.711207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.726305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.726345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.735979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.736018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.751957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.751997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.768277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.768321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.785865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.785904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.801207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.801246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.819973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.316 [2024-11-20 16:54:15.820014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.316 [2024-11-20 16:54:15.835289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.317 [2024-11-20 16:54:15.835327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.574 [2024-11-20 16:54:15.852770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.574 [2024-11-20 16:54:15.852812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.869322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.869360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.886062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.886099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.903663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.903703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.919186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.919230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.936859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.936909] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.952330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.952372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.962380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.962424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.977448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.977491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:15.988137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:15.988183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.002852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.002889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.013014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.013053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.024613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.024654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.039595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.039639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.049689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.049727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.065427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.065468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.081309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.081348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.090876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.090916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.575 [2024-11-20 16:54:16.106326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.575 [2024-11-20 16:54:16.106365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.122261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.122315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.140464] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.140551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.155388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.155449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.171638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.171697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.189929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.189980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.204617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.204680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.220514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.220572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.238787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.238866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.253797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.253851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.270224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.270279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.287447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.287530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.303112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.303155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 11531.25 IOPS, 90.09 MiB/s [2024-11-20T16:54:16.369Z] [2024-11-20 16:54:16.321665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.321705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.336646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.336687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.346060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.833 [2024-11-20 16:54:16.346100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.833 [2024-11-20 16:54:16.363381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
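The interleaved "11531.25 IOPS, 90.09 MiB/s" entries are periodic progress updates from the running I/O job. Given the 8192-byte I/O size reported in the job summary further down, the MiB/s figure is simply IOPS times 8 KiB; a quick sanity check with any POSIX awk:

    # 11531.25 IOPS * 8192 bytes per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11531.25 * 8192 / 1048576 }'
    # -> 90.09 MiB/s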
00:08:57.833 [2024-11-20 16:54:16.363434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.379252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.379294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.389412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.389455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.404272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.404317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.418984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.419022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.434345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.434384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.451197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.451250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.467622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.467663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.484527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.484569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.501162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.501203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.517121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.517163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.535476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.535517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.550844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.550895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.568960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.569018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.583992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.584033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.599719] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.599760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.091 [2024-11-20 16:54:16.617945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.091 [2024-11-20 16:54:16.617986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.632577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.632633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.648448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.648490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.665643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.665700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.681689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.681757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.700696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.700736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.716067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.716108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.732868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.732919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.749538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.749581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.765964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.766002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.784812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.784877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.800366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.800409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.818423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.818492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.833515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.833570] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.849498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.849548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.866961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.867016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.350 [2024-11-20 16:54:16.881910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.350 [2024-11-20 16:54:16.881970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.897921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.897976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.914408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.914500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.930837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.930910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.940410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.940484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.956497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.956568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.972317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.972398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:16.990246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:16.990306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.005946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.006008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.023341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.023429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.037820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.038030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.060088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.060129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.081877] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.081926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.095985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.096024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.111909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.111946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.609 [2024-11-20 16:54:17.130070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.609 [2024-11-20 16:54:17.130110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.147016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.868 [2024-11-20 16:54:17.147055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.163700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.868 [2024-11-20 16:54:17.163761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.179239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.868 [2024-11-20 16:54:17.179291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.195978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.868 [2024-11-20 16:54:17.196021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.212374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.868 [2024-11-20 16:54:17.212441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.868 [2024-11-20 16:54:17.230751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.230805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.246062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.246117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.263964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.264024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.279344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.279395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.292972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.293023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.308224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.308263] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 11435.80 IOPS, 89.34 MiB/s [2024-11-20T16:54:17.405Z] [2024-11-20 16:54:17.327090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.327127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.338133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.338171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 00:08:58.869 Latency(us) 00:08:58.869 [2024-11-20T16:54:17.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.869 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:58.869 Nvme1n1 : 5.02 11423.53 89.25 0.00 0.00 11196.36 4736.47 27644.28 00:08:58.869 [2024-11-20T16:54:17.405Z] =================================================================================================================== 00:08:58.869 [2024-11-20T16:54:17.405Z] Total : 11423.53 89.25 0.00 0.00 11196.36 4736.47 27644.28 00:08:58.869 [2024-11-20 16:54:17.350140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.350176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.362134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.362172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.374160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.374199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.386171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.386208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.869 [2024-11-20 16:54:17.398167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.869 [2024-11-20 16:54:17.398209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.410174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.410217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.422182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.422229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.434186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.434234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.446184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.446228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.458184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 
16:54:17.458227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.470188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.470231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.482195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.482232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.494202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.494247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.506204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.506250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.518184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.518219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 [2024-11-20 16:54:17.530176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.127 [2024-11-20 16:54:17.530206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.127 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65504) - No such process 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65504 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.127 delay0 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.127 16:54:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:59.385 [2024-11-20 16:54:17.735024] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:05.973 Initializing NVMe Controllers 00:09:05.973 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:05.973 Initialization complete. Launching workers. 00:09:05.973 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 823 00:09:05.973 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1110, failed to submit 33 00:09:05.973 success 991, unsuccessful 119, failed 0 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.973 16:54:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.973 rmmod nvme_tcp 00:09:05.973 rmmod nvme_fabrics 00:09:05.973 rmmod nvme_keyring 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65348 ']' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65348 ']' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.973 killing process with pid 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65348' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65348 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.973 16:54:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:05.973 00:09:05.973 real 0m25.154s 00:09:05.973 user 0m41.157s 00:09:05.973 sys 0m6.613s 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.973 ************************************ 00:09:05.973 END TEST nvmf_zcopy 00:09:05.973 ************************************ 00:09:05.973 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.232 16:54:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:06.232 16:54:24 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.232 16:54:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.232 16:54:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.232 ************************************ 00:09:06.233 START TEST nvmf_nmic 00:09:06.233 ************************************ 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:06.233 * Looking for test storage... 00:09:06.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.233 --rc genhtml_branch_coverage=1 00:09:06.233 --rc genhtml_function_coverage=1 00:09:06.233 --rc genhtml_legend=1 00:09:06.233 --rc geninfo_all_blocks=1 00:09:06.233 --rc geninfo_unexecuted_blocks=1 00:09:06.233 00:09:06.233 ' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.233 --rc genhtml_branch_coverage=1 00:09:06.233 --rc genhtml_function_coverage=1 00:09:06.233 --rc genhtml_legend=1 00:09:06.233 --rc geninfo_all_blocks=1 00:09:06.233 --rc geninfo_unexecuted_blocks=1 00:09:06.233 00:09:06.233 ' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.233 --rc genhtml_branch_coverage=1 00:09:06.233 --rc genhtml_function_coverage=1 00:09:06.233 --rc genhtml_legend=1 00:09:06.233 --rc geninfo_all_blocks=1 00:09:06.233 --rc geninfo_unexecuted_blocks=1 00:09:06.233 00:09:06.233 ' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.233 --rc genhtml_branch_coverage=1 00:09:06.233 --rc genhtml_function_coverage=1 00:09:06.233 --rc genhtml_legend=1 00:09:06.233 --rc geninfo_all_blocks=1 00:09:06.233 --rc geninfo_unexecuted_blocks=1 00:09:06.233 00:09:06.233 ' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.233 16:54:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.233 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.493 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:06.493 16:54:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:06.493 Cannot 
find device "nvmf_init_br" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:06.493 Cannot find device "nvmf_init_br2" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:06.493 Cannot find device "nvmf_tgt_br" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.493 Cannot find device "nvmf_tgt_br2" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:06.493 Cannot find device "nvmf_init_br" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:06.493 Cannot find device "nvmf_init_br2" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:06.493 Cannot find device "nvmf_tgt_br" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:06.493 Cannot find device "nvmf_tgt_br2" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:06.493 Cannot find device "nvmf_br" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:06.493 Cannot find device "nvmf_init_if" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:06.493 Cannot find device "nvmf_init_if2" 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.493 16:54:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.493 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.493 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:06.493 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:06.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:09:06.752 00:09:06.752 --- 10.0.0.3 ping statistics --- 00:09:06.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.752 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:06.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:06.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:06.752 00:09:06.752 --- 10.0.0.4 ping statistics --- 00:09:06.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.752 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:06.752 00:09:06.752 --- 10.0.0.1 ping statistics --- 00:09:06.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.752 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:06.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:06.752 00:09:06.752 --- 10.0.0.2 ping statistics --- 00:09:06.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.752 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65879 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.752 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65879 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65879 ']' 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.753 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.753 [2024-11-20 16:54:25.266957] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
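With all four addresses answering, nvmfappstart prepends the namespace wrapper to NVMF_APP and launches the target (pid 65879 in this run), then waitforlisten blocks until the app is serving RPCs on /var/tmp/spdk.sock. A rough shell equivalent of what the trace shows; the binary path and flags are taken from the log, while the polling loop is only an approximation of the waitforlisten helper, not its actual implementation:

    # start nvmf_tgt inside the target namespace, backgrounded as the test app
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the app is up and answering on its default UNIX RPC socket
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done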
00:09:06.753 [2024-11-20 16:54:25.267065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.011 [2024-11-20 16:54:25.420879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.011 [2024-11-20 16:54:25.490539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.011 [2024-11-20 16:54:25.490612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.011 [2024-11-20 16:54:25.490627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.011 [2024-11-20 16:54:25.490638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.011 [2024-11-20 16:54:25.490646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.011 [2024-11-20 16:54:25.491892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.011 [2024-11-20 16:54:25.491967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.011 [2024-11-20 16:54:25.492072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.011 [2024-11-20 16:54:25.492073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.270 [2024-11-20 16:54:25.550512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 [2024-11-20 16:54:25.664333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 Malloc0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.270 16:54:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 [2024-11-20 16:54:25.732913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:07.270 test case1: single bdev can't be used in multiple subsystems 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 [2024-11-20 16:54:25.756709] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:07.270 [2024-11-20 16:54:25.756900] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:07.270 [2024-11-20 16:54:25.756916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.270 request: 00:09:07.270 { 00:09:07.270 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:07.270 "namespace": { 00:09:07.270 "bdev_name": "Malloc0", 00:09:07.270 "no_auto_visible": false 00:09:07.270 }, 00:09:07.270 "method": "nvmf_subsystem_add_ns", 00:09:07.270 "req_id": 1 00:09:07.270 } 00:09:07.270 Got JSON-RPC error response 00:09:07.270 response: 00:09:07.270 { 00:09:07.270 "code": -32602, 00:09:07.270 "message": "Invalid parameters" 00:09:07.270 } 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:07.270 Adding namespace failed - expected result. 00:09:07.270 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:07.270 test case2: host connect to nvmf target in multiple paths 00:09:07.271 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:07.271 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.271 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.271 [2024-11-20 16:54:25.772882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:07.271 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.271 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:07.529 16:54:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:07.529 16:54:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.529 16:54:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:07.529 16:54:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.529 16:54:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:07.529 16:54:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.061 16:54:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:10.061 16:54:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:10.061 [global] 00:09:10.061 thread=1 00:09:10.061 invalidate=1 00:09:10.061 rw=write 00:09:10.061 time_based=1 00:09:10.061 runtime=1 00:09:10.061 ioengine=libaio 00:09:10.061 direct=1 00:09:10.061 bs=4096 00:09:10.061 iodepth=1 00:09:10.061 norandommap=0 00:09:10.061 numjobs=1 00:09:10.061 00:09:10.061 verify_dump=1 00:09:10.061 verify_backlog=512 00:09:10.061 verify_state_save=0 00:09:10.061 do_verify=1 00:09:10.061 verify=crc32c-intel 00:09:10.061 [job0] 00:09:10.061 filename=/dev/nvme0n1 00:09:10.061 Could not set queue depth (nvme0n1) 00:09:10.061 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.061 fio-3.35 00:09:10.061 Starting 1 thread 00:09:10.994 00:09:10.994 job0: (groupid=0, jobs=1): err= 0: pid=65963: Wed Nov 20 16:54:29 2024 00:09:10.994 read: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:09:10.994 slat (nsec): min=12393, max=64507, avg=16854.02, stdev=4025.70 00:09:10.994 clat (usec): min=134, max=483, avg=177.36, stdev=22.42 00:09:10.994 lat (usec): min=147, max=496, avg=194.22, stdev=23.23 00:09:10.994 clat percentiles (usec): 00:09:10.994 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:09:10.994 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:10.994 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:09:10.994 | 99.00th=[ 233], 99.50th=[ 249], 99.90th=[ 433], 99.95th=[ 437], 00:09:10.994 | 99.99th=[ 486] 00:09:10.994 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:10.994 slat (nsec): min=14394, max=89500, avg=24269.23, stdev=5831.88 00:09:10.994 clat (usec): min=75, max=475, avg=113.01, stdev=20.15 00:09:10.994 lat (usec): min=102, max=496, avg=137.27, stdev=22.23 00:09:10.994 clat percentiles (usec): 00:09:10.994 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:09:10.994 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 115], 00:09:10.994 | 70.00th=[ 119], 80.00th=[ 126], 90.00th=[ 135], 95.00th=[ 143], 00:09:10.994 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 326], 99.95th=[ 338], 00:09:10.994 | 99.99th=[ 478] 00:09:10.994 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:10.994 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:10.994 lat (usec) : 100=12.19%, 250=87.45%, 500=0.37% 00:09:10.994 cpu : usr=2.20%, sys=10.10%, ctx=5990, majf=0, minf=5 00:09:10.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.994 issued rwts: total=2918,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.994 00:09:10.994 Run status group 0 (all jobs): 00:09:10.994 READ: bw=11.4MiB/s (11.9MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-11.9MB/s), io=11.4MiB (12.0MB), run=1001-1001msec 00:09:10.994 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:10.994 00:09:10.994 Disk stats (read/write): 00:09:10.994 nvme0n1: ios=2610/2890, merge=0/0, ticks=479/349, 
in_queue=828, util=91.48% 00:09:10.994 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:10.994 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.995 rmmod nvme_tcp 00:09:10.995 rmmod nvme_fabrics 00:09:10.995 rmmod nvme_keyring 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65879 ']' 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65879 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65879 ']' 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65879 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.995 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65879 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.253 killing process with pid 65879 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65879' 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65879 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65879 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.253 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.511 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.512 16:54:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.512 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:11.512 00:09:11.512 real 0m5.477s 00:09:11.512 user 0m15.899s 00:09:11.512 sys 0m2.281s 00:09:11.512 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.512 ************************************ 00:09:11.512 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
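Reading past the xtrace noise, the nmic test body above reduces to a short JSON-RPC sequence, and the only expected failure is the second nvmf_subsystem_add_ns: Malloc0 is already claimed exclusive_write by cnode1, so claiming it again for cnode2 is rejected with code -32602. A condensed sketch of that sequence using scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; the status bookkeeping of the real script is simplified here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # test case 1: the same bdev cannot back a namespace in a second subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected success' \
        || echo ' Adding namespace failed - expected result.'

    # test case 2: a second listener on 4421 gives the host two paths to cnode1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

After that, the host connects to cnode1 over both ports with nvme connect, runs the 4k write/verify fio job shown above against /dev/nvme0n1, and disconnects, which is exactly what the remainder of the trace records.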
00:09:11.512 END TEST nvmf_nmic 00:09:11.512 ************************************ 00:09:11.771 16:54:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:11.771 16:54:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.771 16:54:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.772 ************************************ 00:09:11.772 START TEST nvmf_fio_target 00:09:11.772 ************************************ 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:11.772 * Looking for test storage... 00:09:11.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.772 --rc genhtml_branch_coverage=1 00:09:11.772 --rc genhtml_function_coverage=1 00:09:11.772 --rc genhtml_legend=1 00:09:11.772 --rc geninfo_all_blocks=1 00:09:11.772 --rc geninfo_unexecuted_blocks=1 00:09:11.772 00:09:11.772 ' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.772 --rc genhtml_branch_coverage=1 00:09:11.772 --rc genhtml_function_coverage=1 00:09:11.772 --rc genhtml_legend=1 00:09:11.772 --rc geninfo_all_blocks=1 00:09:11.772 --rc geninfo_unexecuted_blocks=1 00:09:11.772 00:09:11.772 ' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.772 --rc genhtml_branch_coverage=1 00:09:11.772 --rc genhtml_function_coverage=1 00:09:11.772 --rc genhtml_legend=1 00:09:11.772 --rc geninfo_all_blocks=1 00:09:11.772 --rc geninfo_unexecuted_blocks=1 00:09:11.772 00:09:11.772 ' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.772 --rc genhtml_branch_coverage=1 00:09:11.772 --rc genhtml_function_coverage=1 00:09:11.772 --rc genhtml_legend=1 00:09:11.772 --rc geninfo_all_blocks=1 00:09:11.772 --rc geninfo_unexecuted_blocks=1 00:09:11.772 00:09:11.772 ' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:11.772 
16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.772 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.773 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.773 16:54:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.773 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:12.032 Cannot find device "nvmf_init_br" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:12.032 Cannot find device "nvmf_init_br2" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:12.032 Cannot find device "nvmf_tgt_br" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.032 Cannot find device "nvmf_tgt_br2" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:12.032 Cannot find device "nvmf_init_br" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:12.032 Cannot find device "nvmf_init_br2" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:12.032 Cannot find device "nvmf_tgt_br" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:12.032 Cannot find device "nvmf_tgt_br2" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:12.032 Cannot find device "nvmf_br" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:12.032 Cannot find device "nvmf_init_if" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:12.032 Cannot find device "nvmf_init_if2" 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:12.032 
16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:12.032 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:12.291 00:09:12.291 --- 10.0.0.3 ping statistics --- 00:09:12.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.291 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:12.291 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:12.291 00:09:12.291 --- 10.0.0.4 ping statistics --- 00:09:12.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.292 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:12.292 00:09:12.292 --- 10.0.0.1 ping statistics --- 00:09:12.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.292 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:12.292 00:09:12.292 --- 10.0.0.2 ping statistics --- 00:09:12.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.292 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66197 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66197 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66197 ']' 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.292 16:54:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.292 [2024-11-20 16:54:30.725141] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
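(Aside, not part of the captured console output.) The nvmf_veth_init sequence traced above builds two initiator veth pairs, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, bridges everything over nvmf_br, opens TCP port 4420, and ping-checks both directions. A rough standalone sketch of that topology is below; the interface names and addresses mirror the log, but the script itself is only illustrative, not the SPDK helper.

#!/usr/bin/env bash
# Sketch: rebuild the veth/namespace/bridge topology seen in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target-side ends live inside the namespace; initiator ends stay in the root ns.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge all four peer ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
done
# Allow NVMe/TCP traffic (port 4420) and bridge-local forwarding, as in the log.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity check in both directions, as the harness does.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1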
00:09:12.292 [2024-11-20 16:54:30.725231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.551 [2024-11-20 16:54:30.872989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.551 [2024-11-20 16:54:30.926660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.551 [2024-11-20 16:54:30.926733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.551 [2024-11-20 16:54:30.926745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.551 [2024-11-20 16:54:30.926753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.551 [2024-11-20 16:54:30.926760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.551 [2024-11-20 16:54:30.928028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.551 [2024-11-20 16:54:30.928171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.551 [2024-11-20 16:54:30.928217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.551 [2024-11-20 16:54:30.928221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.551 [2024-11-20 16:54:30.999413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.551 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.551 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:12.551 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.551 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.551 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.811 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.811 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.069 [2024-11-20 16:54:31.461485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.069 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.328 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:13.328 16:54:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.896 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:13.896 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.896 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:13.896 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.492 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:14.492 16:54:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:14.750 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.007 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:15.007 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.264 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:15.265 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.523 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:15.523 16:54:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:15.781 16:54:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.039 16:54:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.039 16:54:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.297 16:54:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.297 16:54:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.555 16:54:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:16.813 [2024-11-20 16:54:35.314552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:16.813 16:54:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:17.378 16:54:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:17.636 16:54:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:17.636 16:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:17.636 16:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:17.636 16:54:36 
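(Aside, not part of the captured console output.) The fio.sh steps above configure the target entirely through rpc.py: seven 64 MiB malloc bdevs, a two-disk RAID0 and a three-disk concat on top of them, one subsystem with four namespaces, and a TCP listener on 10.0.0.3:4420, followed by an initiator-side nvme connect. A condensed sketch of that RPC sequence follows; paths, NQN and serial are taken from the log, while the loop structure (and the omitted --hostnqn/--hostid options on nvme connect) is illustrative.

#!/usr/bin/env bash
# Sketch: target configuration as performed by target/fio.sh in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192        # flags copied from the trace
for i in 0 1 2 3 4 5 6; do
    $rpc bdev_malloc_create 64 512                   # 64 MiB, 512 B blocks -> Malloc0..Malloc6
done
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'           # 2-disk RAID0
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'   # 3-disk concat

$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"        # four namespaces -> nvme0n1..nvme0n4
done
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

# Initiator side: connect and wait until all four namespaces enumerate,
# mirroring the waitforserial loop in the log (lsblk + grep on the serial).
nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420
while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 4 ]; do
    sleep 1
done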
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.636 16:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:17.636 16:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:17.636 16:54:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:20.163 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:20.163 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:20.163 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.163 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:20.163 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.164 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:20.164 16:54:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.164 [global] 00:09:20.164 thread=1 00:09:20.164 invalidate=1 00:09:20.164 rw=write 00:09:20.164 time_based=1 00:09:20.164 runtime=1 00:09:20.164 ioengine=libaio 00:09:20.164 direct=1 00:09:20.164 bs=4096 00:09:20.164 iodepth=1 00:09:20.164 norandommap=0 00:09:20.164 numjobs=1 00:09:20.164 00:09:20.164 verify_dump=1 00:09:20.164 verify_backlog=512 00:09:20.164 verify_state_save=0 00:09:20.164 do_verify=1 00:09:20.164 verify=crc32c-intel 00:09:20.164 [job0] 00:09:20.164 filename=/dev/nvme0n1 00:09:20.164 [job1] 00:09:20.164 filename=/dev/nvme0n2 00:09:20.164 [job2] 00:09:20.164 filename=/dev/nvme0n3 00:09:20.164 [job3] 00:09:20.164 filename=/dev/nvme0n4 00:09:20.164 Could not set queue depth (nvme0n1) 00:09:20.164 Could not set queue depth (nvme0n2) 00:09:20.164 Could not set queue depth (nvme0n3) 00:09:20.164 Could not set queue depth (nvme0n4) 00:09:20.164 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.164 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.164 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.164 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.164 fio-3.35 00:09:20.164 Starting 4 threads 00:09:21.099 00:09:21.099 job0: (groupid=0, jobs=1): err= 0: pid=66380: Wed Nov 20 16:54:39 2024 00:09:21.099 read: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec) 00:09:21.099 slat (usec): min=11, max=231, avg=14.22, stdev= 6.15 00:09:21.099 clat (usec): min=138, max=1763, avg=286.72, stdev=63.86 00:09:21.099 lat (usec): min=152, max=1776, avg=300.94, stdev=65.27 00:09:21.099 clat percentiles (usec): 00:09:21.099 | 1.00th=[ 192], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:09:21.099 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:21.099 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 383], 00:09:21.099 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 676], 99.95th=[ 1762], 00:09:21.099 | 99.99th=[ 1762] 
00:09:21.099 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:21.099 slat (nsec): min=16423, max=93546, avg=20321.95, stdev=5293.23 00:09:21.099 clat (usec): min=93, max=674, avg=195.29, stdev=34.53 00:09:21.099 lat (usec): min=110, max=692, avg=215.61, stdev=35.09 00:09:21.099 clat percentiles (usec): 00:09:21.099 | 1.00th=[ 101], 5.00th=[ 117], 10.00th=[ 149], 20.00th=[ 188], 00:09:21.099 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:09:21.099 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 235], 00:09:21.099 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 529], 99.95th=[ 562], 00:09:21.099 | 99.99th=[ 676] 00:09:21.099 bw ( KiB/s): min= 8192, max= 8192, per=19.65%, avg=8192.00, stdev= 0.00, samples=1 00:09:21.099 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:21.099 lat (usec) : 100=0.46%, 250=55.98%, 500=42.48%, 750=1.06% 00:09:21.099 lat (msec) : 2=0.03% 00:09:21.099 cpu : usr=1.40%, sys=5.40%, ctx=3883, majf=0, minf=7 00:09:21.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.099 issued rwts: total=1834,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.099 job1: (groupid=0, jobs=1): err= 0: pid=66381: Wed Nov 20 16:54:39 2024 00:09:21.099 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:21.099 slat (nsec): min=10963, max=80840, avg=14242.88, stdev=3153.60 00:09:21.099 clat (usec): min=132, max=1570, avg=159.27, stdev=28.32 00:09:21.099 lat (usec): min=144, max=1582, avg=173.51, stdev=28.75 00:09:21.099 clat percentiles (usec): 00:09:21.099 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:21.099 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:09:21.099 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:09:21.099 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 258], 99.95th=[ 281], 00:09:21.099 | 99.99th=[ 1565] 00:09:21.099 write: IOPS=3263, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:09:21.099 slat (nsec): min=13580, max=59796, avg=19980.64, stdev=3263.23 00:09:21.099 clat (usec): min=92, max=521, avg=119.76, stdev=12.84 00:09:21.099 lat (usec): min=109, max=538, avg=139.74, stdev=13.55 00:09:21.099 clat percentiles (usec): 00:09:21.099 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:09:21.099 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:09:21.099 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 139], 00:09:21.100 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 172], 99.95th=[ 192], 00:09:21.100 | 99.99th=[ 523] 00:09:21.100 bw ( KiB/s): min=12552, max=12552, per=30.10%, avg=12552.00, stdev= 0.00, samples=1 00:09:21.100 iops : min= 3138, max= 3138, avg=3138.00, stdev= 0.00, samples=1 00:09:21.100 lat (usec) : 100=1.34%, 250=98.58%, 500=0.05%, 750=0.02% 00:09:21.100 lat (msec) : 2=0.02% 00:09:21.100 cpu : usr=2.30%, sys=8.60%, ctx=6340, majf=0, minf=3 00:09:21.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 issued rwts: total=3072,3267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.100 
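(Aside, not part of the captured console output.) The fio-wrapper call above (-i 4096 -d 1 -t write -r 1 -v) expands to the job file that fio echoed before the results. For reference, an equivalent standalone job file and invocation would look roughly like the sketch below; the parameters are the ones printed in the log, and the file name nvmf_verify.fio plus the nvme0n1..n4 device paths are assumptions about the local setup.

# Sketch: run the same 4 KiB, iodepth=1, crc32c-verified write job outside the wrapper.
cat > nvmf_verify.fio <<'EOF'
[global]
ioengine=libaio
direct=1
thread=1
invalidate=1
rw=write
bs=4096
iodepth=1
numjobs=1
time_based=1
runtime=1
do_verify=1
verify=crc32c-intel
verify_backlog=512
verify_dump=1
verify_state_save=0

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf_verify.fio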
latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.100 job2: (groupid=0, jobs=1): err= 0: pid=66382: Wed Nov 20 16:54:39 2024 00:09:21.100 read: IOPS=1793, BW=7173KiB/s (7345kB/s)(7180KiB/1001msec) 00:09:21.100 slat (nsec): min=11915, max=61940, avg=14943.11, stdev=3708.41 00:09:21.100 clat (usec): min=160, max=1515, avg=282.46, stdev=49.10 00:09:21.100 lat (usec): min=175, max=1528, avg=297.40, stdev=49.66 00:09:21.100 clat percentiles (usec): 00:09:21.100 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:09:21.100 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:21.100 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 355], 00:09:21.100 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 783], 99.95th=[ 1516], 00:09:21.100 | 99.99th=[ 1516] 00:09:21.100 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:21.100 slat (nsec): min=17096, max=77733, avg=21047.72, stdev=4698.48 00:09:21.100 clat (usec): min=114, max=2533, avg=203.52, stdev=65.96 00:09:21.100 lat (usec): min=136, max=2572, avg=224.57, stdev=67.32 00:09:21.100 clat percentiles (usec): 00:09:21.100 | 1.00th=[ 126], 5.00th=[ 139], 10.00th=[ 169], 20.00th=[ 188], 00:09:21.100 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:09:21.100 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 249], 00:09:21.100 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 502], 99.95th=[ 1123], 00:09:21.100 | 99.99th=[ 2540] 00:09:21.100 bw ( KiB/s): min= 8192, max= 8192, per=19.65%, avg=8192.00, stdev= 0.00, samples=1 00:09:21.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:21.100 lat (usec) : 250=54.54%, 500=45.20%, 750=0.16%, 1000=0.03% 00:09:21.100 lat (msec) : 2=0.05%, 4=0.03% 00:09:21.100 cpu : usr=1.60%, sys=5.40%, ctx=3844, majf=0, minf=9 00:09:21.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 issued rwts: total=1795,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.100 job3: (groupid=0, jobs=1): err= 0: pid=66383: Wed Nov 20 16:54:39 2024 00:09:21.100 read: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:09:21.100 slat (nsec): min=10965, max=38918, avg=13230.07, stdev=2143.44 00:09:21.100 clat (usec): min=141, max=513, avg=168.20, stdev=14.46 00:09:21.100 lat (usec): min=153, max=525, avg=181.43, stdev=14.89 00:09:21.100 clat percentiles (usec): 00:09:21.100 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:21.100 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:09:21.100 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:09:21.100 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 273], 99.95th=[ 347], 00:09:21.100 | 99.99th=[ 515] 00:09:21.100 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:21.100 slat (usec): min=12, max=103, avg=19.09, stdev= 4.11 00:09:21.100 clat (usec): min=98, max=420, avg=129.01, stdev=13.07 00:09:21.100 lat (usec): min=116, max=437, avg=148.10, stdev=14.31 00:09:21.100 clat percentiles (usec): 00:09:21.100 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:09:21.100 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:09:21.100 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 
95.00th=[ 151], 00:09:21.100 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 188], 00:09:21.100 | 99.99th=[ 420] 00:09:21.100 bw ( KiB/s): min=12288, max=12288, per=29.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:21.100 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:21.100 lat (usec) : 100=0.02%, 250=99.92%, 500=0.05%, 750=0.02% 00:09:21.100 cpu : usr=1.90%, sys=7.90%, ctx=6023, majf=0, minf=17 00:09:21.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.100 issued rwts: total=2951,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.100 00:09:21.100 Run status group 0 (all jobs): 00:09:21.100 READ: bw=37.7MiB/s (39.5MB/s), 7173KiB/s-12.0MiB/s (7345kB/s-12.6MB/s), io=37.7MiB (39.5MB), run=1001-1001msec 00:09:21.100 WRITE: bw=40.7MiB/s (42.7MB/s), 8184KiB/s-12.7MiB/s (8380kB/s-13.4MB/s), io=40.8MiB (42.7MB), run=1001-1001msec 00:09:21.100 00:09:21.100 Disk stats (read/write): 00:09:21.100 nvme0n1: ios=1586/1784, merge=0/0, ticks=469/355, in_queue=824, util=86.67% 00:09:21.100 nvme0n2: ios=2595/2789, merge=0/0, ticks=446/355, in_queue=801, util=87.30% 00:09:21.100 nvme0n3: ios=1536/1706, merge=0/0, ticks=438/370, in_queue=808, util=88.89% 00:09:21.100 nvme0n4: ios=2525/2560, merge=0/0, ticks=444/342, in_queue=786, util=89.56% 00:09:21.100 16:54:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:21.100 [global] 00:09:21.100 thread=1 00:09:21.100 invalidate=1 00:09:21.100 rw=randwrite 00:09:21.100 time_based=1 00:09:21.100 runtime=1 00:09:21.100 ioengine=libaio 00:09:21.100 direct=1 00:09:21.100 bs=4096 00:09:21.100 iodepth=1 00:09:21.100 norandommap=0 00:09:21.100 numjobs=1 00:09:21.100 00:09:21.100 verify_dump=1 00:09:21.100 verify_backlog=512 00:09:21.100 verify_state_save=0 00:09:21.100 do_verify=1 00:09:21.100 verify=crc32c-intel 00:09:21.100 [job0] 00:09:21.100 filename=/dev/nvme0n1 00:09:21.100 [job1] 00:09:21.100 filename=/dev/nvme0n2 00:09:21.100 [job2] 00:09:21.100 filename=/dev/nvme0n3 00:09:21.100 [job3] 00:09:21.100 filename=/dev/nvme0n4 00:09:21.100 Could not set queue depth (nvme0n1) 00:09:21.100 Could not set queue depth (nvme0n2) 00:09:21.100 Could not set queue depth (nvme0n3) 00:09:21.100 Could not set queue depth (nvme0n4) 00:09:21.367 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.367 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.367 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.367 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.367 fio-3.35 00:09:21.367 Starting 4 threads 00:09:22.753 00:09:22.753 job0: (groupid=0, jobs=1): err= 0: pid=66436: Wed Nov 20 16:54:40 2024 00:09:22.753 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:22.753 slat (usec): min=13, max=213, avg=25.95, stdev=12.95 00:09:22.753 clat (usec): min=144, max=2187, avg=371.96, stdev=139.36 00:09:22.753 lat (usec): min=159, max=2202, avg=397.91, stdev=147.92 00:09:22.753 clat percentiles (usec): 
00:09:22.753 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 285], 00:09:22.753 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:09:22.753 | 70.00th=[ 343], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 627], 00:09:22.753 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 2180], 00:09:22.753 | 99.99th=[ 2180] 00:09:22.753 write: IOPS=1606, BW=6426KiB/s (6580kB/s)(6432KiB/1001msec); 0 zone resets 00:09:22.753 slat (nsec): min=19129, max=95744, avg=27833.97, stdev=8860.83 00:09:22.753 clat (usec): min=106, max=625, avg=208.36, stdev=42.72 00:09:22.753 lat (usec): min=128, max=652, avg=236.20, stdev=44.97 00:09:22.753 clat percentiles (usec): 00:09:22.753 | 1.00th=[ 115], 5.00th=[ 143], 10.00th=[ 159], 20.00th=[ 176], 00:09:22.753 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 227], 00:09:22.753 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:09:22.753 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 627], 00:09:22.753 | 99.99th=[ 627] 00:09:22.753 bw ( KiB/s): min= 8192, max= 8192, per=23.41%, avg=8192.00, stdev= 0.00, samples=1 00:09:22.753 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:22.753 lat (usec) : 250=52.00%, 500=36.64%, 750=11.32% 00:09:22.753 lat (msec) : 4=0.03% 00:09:22.753 cpu : usr=1.60%, sys=7.10%, ctx=3145, majf=0, minf=13 00:09:22.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.753 issued rwts: total=1536,1608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.753 job1: (groupid=0, jobs=1): err= 0: pid=66437: Wed Nov 20 16:54:40 2024 00:09:22.753 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 00:09:22.753 slat (nsec): min=8254, max=86858, avg=13188.07, stdev=4639.61 00:09:22.753 clat (usec): min=140, max=704, avg=328.75, stdev=60.82 00:09:22.753 lat (usec): min=153, max=729, avg=341.94, stdev=61.06 00:09:22.753 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 206], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 297], 00:09:22.754 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:09:22.754 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 433], 00:09:22.754 | 99.00th=[ 486], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 709], 00:09:22.754 | 99.99th=[ 709] 00:09:22.754 write: IOPS=2032, BW=8132KiB/s (8327kB/s)(8148KiB/1002msec); 0 zone resets 00:09:22.754 slat (nsec): min=10118, max=97801, avg=18212.62, stdev=5175.09 00:09:22.754 clat (usec): min=113, max=350, avg=212.08, stdev=29.96 00:09:22.754 lat (usec): min=130, max=382, avg=230.29, stdev=30.49 00:09:22.754 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 147], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:09:22.754 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 219], 00:09:22.754 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 262], 00:09:22.754 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 334], 00:09:22.754 | 99.99th=[ 351] 00:09:22.754 bw ( KiB/s): min= 8104, max= 8192, per=23.29%, avg=8148.00, stdev=62.23, samples=2 00:09:22.754 iops : min= 2026, max= 2048, avg=2037.00, stdev=15.56, samples=2 00:09:22.754 lat (usec) : 250=56.65%, 500=42.99%, 750=0.36% 00:09:22.754 cpu : usr=1.70%, sys=4.40%, ctx=3574, majf=0, minf=9 00:09:22.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 issued rwts: total=1536,2037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.754 job2: (groupid=0, jobs=1): err= 0: pid=66438: Wed Nov 20 16:54:40 2024 00:09:22.754 read: IOPS=3036, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:09:22.754 slat (nsec): min=10721, max=32731, avg=12916.30, stdev=2146.18 00:09:22.754 clat (usec): min=135, max=2103, avg=166.71, stdev=40.54 00:09:22.754 lat (usec): min=147, max=2117, avg=179.63, stdev=40.71 00:09:22.754 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:09:22.754 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:22.754 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:09:22.754 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 570], 99.95th=[ 652], 00:09:22.754 | 99.99th=[ 2114] 00:09:22.754 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:22.754 slat (nsec): min=13626, max=73094, avg=19171.25, stdev=3618.41 00:09:22.754 clat (usec): min=97, max=204, avg=125.36, stdev=11.47 00:09:22.754 lat (usec): min=114, max=277, avg=144.54, stdev=12.47 00:09:22.754 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:09:22.754 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 127], 00:09:22.754 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:09:22.754 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 190], 00:09:22.754 | 99.99th=[ 204] 00:09:22.754 bw ( KiB/s): min=12288, max=12288, per=35.12%, avg=12288.00, stdev= 0.00, samples=1 00:09:22.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:22.754 lat (usec) : 100=0.15%, 250=99.69%, 500=0.10%, 750=0.05% 00:09:22.754 lat (msec) : 4=0.02% 00:09:22.754 cpu : usr=1.90%, sys=8.20%, ctx=6112, majf=0, minf=15 00:09:22.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 issued rwts: total=3040,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.754 job3: (groupid=0, jobs=1): err= 0: pid=66439: Wed Nov 20 16:54:40 2024 00:09:22.754 read: IOPS=1607, BW=6430KiB/s (6584kB/s)(6436KiB/1001msec) 00:09:22.754 slat (nsec): min=8437, max=71466, avg=14941.93, stdev=4250.98 00:09:22.754 clat (usec): min=142, max=6469, avg=333.47, stdev=269.61 00:09:22.754 lat (usec): min=155, max=6481, avg=348.41, stdev=270.18 00:09:22.754 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 186], 00:09:22.754 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:09:22.754 | 70.00th=[ 343], 80.00th=[ 396], 90.00th=[ 453], 95.00th=[ 474], 00:09:22.754 | 99.00th=[ 515], 99.50th=[ 652], 99.90th=[ 5145], 99.95th=[ 6456], 00:09:22.754 | 99.99th=[ 6456] 00:09:22.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:22.754 slat (nsec): min=10481, max=77197, avg=17825.29, stdev=4516.14 00:09:22.754 clat (usec): min=120, max=329, avg=193.75, stdev=40.41 00:09:22.754 lat (usec): 
min=138, max=357, avg=211.57, stdev=39.63 00:09:22.754 clat percentiles (usec): 00:09:22.754 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 153], 00:09:22.754 | 30.00th=[ 165], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 202], 00:09:22.754 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:09:22.754 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 285], 99.95th=[ 285], 00:09:22.754 | 99.99th=[ 330] 00:09:22.754 bw ( KiB/s): min= 8192, max= 8192, per=23.41%, avg=8192.00, stdev= 0.00, samples=1 00:09:22.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:22.754 lat (usec) : 250=59.83%, 500=39.57%, 750=0.38% 00:09:22.754 lat (msec) : 2=0.03%, 4=0.14%, 10=0.05% 00:09:22.754 cpu : usr=2.00%, sys=4.60%, ctx=3657, majf=0, minf=7 00:09:22.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.754 issued rwts: total=1609,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.754 00:09:22.754 Run status group 0 (all jobs): 00:09:22.754 READ: bw=30.1MiB/s (31.6MB/s), 6132KiB/s-11.9MiB/s (6279kB/s-12.4MB/s), io=30.2MiB (31.6MB), run=1001-1002msec 00:09:22.754 WRITE: bw=34.2MiB/s (35.8MB/s), 6426KiB/s-12.0MiB/s (6580kB/s-12.6MB/s), io=34.2MiB (35.9MB), run=1001-1002msec 00:09:22.754 00:09:22.754 Disk stats (read/write): 00:09:22.754 nvme0n1: ios=1397/1536, merge=0/0, ticks=485/336, in_queue=821, util=87.98% 00:09:22.754 nvme0n2: ios=1510/1536, merge=0/0, ticks=489/331, in_queue=820, util=89.27% 00:09:22.754 nvme0n3: ios=2560/2718, merge=0/0, ticks=429/360, in_queue=789, util=89.05% 00:09:22.754 nvme0n4: ios=1536/1536, merge=0/0, ticks=481/291, in_queue=772, util=88.44% 00:09:22.754 16:54:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:22.754 [global] 00:09:22.754 thread=1 00:09:22.754 invalidate=1 00:09:22.754 rw=write 00:09:22.754 time_based=1 00:09:22.754 runtime=1 00:09:22.754 ioengine=libaio 00:09:22.754 direct=1 00:09:22.754 bs=4096 00:09:22.754 iodepth=128 00:09:22.754 norandommap=0 00:09:22.754 numjobs=1 00:09:22.754 00:09:22.754 verify_dump=1 00:09:22.754 verify_backlog=512 00:09:22.754 verify_state_save=0 00:09:22.754 do_verify=1 00:09:22.754 verify=crc32c-intel 00:09:22.754 [job0] 00:09:22.754 filename=/dev/nvme0n1 00:09:22.754 [job1] 00:09:22.754 filename=/dev/nvme0n2 00:09:22.754 [job2] 00:09:22.754 filename=/dev/nvme0n3 00:09:22.754 [job3] 00:09:22.754 filename=/dev/nvme0n4 00:09:22.754 Could not set queue depth (nvme0n1) 00:09:22.754 Could not set queue depth (nvme0n2) 00:09:22.754 Could not set queue depth (nvme0n3) 00:09:22.754 Could not set queue depth (nvme0n4) 00:09:22.754 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.754 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.754 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.754 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.754 fio-3.35 00:09:22.754 Starting 4 threads 00:09:24.131 00:09:24.131 job0: (groupid=0, jobs=1): err= 0: pid=66504: Wed Nov 20 
16:54:42 2024 00:09:24.131 read: IOPS=2227, BW=8911KiB/s (9124kB/s)(8964KiB/1006msec) 00:09:24.131 slat (usec): min=5, max=9636, avg=237.50, stdev=870.40 00:09:24.131 clat (usec): min=4667, max=51324, avg=29102.52, stdev=6716.28 00:09:24.131 lat (usec): min=9479, max=51340, avg=29340.01, stdev=6742.48 00:09:24.131 clat percentiles (usec): 00:09:24.131 | 1.00th=[16057], 5.00th=[19006], 10.00th=[21103], 20.00th=[23725], 00:09:24.131 | 30.00th=[25822], 40.00th=[27132], 50.00th=[28181], 60.00th=[30016], 00:09:24.131 | 70.00th=[31065], 80.00th=[33817], 90.00th=[38536], 95.00th=[42206], 00:09:24.131 | 99.00th=[46924], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:09:24.131 | 99.99th=[51119] 00:09:24.131 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:09:24.131 slat (usec): min=8, max=11502, avg=175.91, stdev=690.48 00:09:24.131 clat (usec): min=12350, max=44954, avg=23834.82, stdev=6675.87 00:09:24.131 lat (usec): min=12702, max=44995, avg=24010.73, stdev=6716.92 00:09:24.131 clat percentiles (usec): 00:09:24.131 | 1.00th=[13304], 5.00th=[15139], 10.00th=[15926], 20.00th=[19006], 00:09:24.131 | 30.00th=[20317], 40.00th=[21627], 50.00th=[22152], 60.00th=[22938], 00:09:24.131 | 70.00th=[26084], 80.00th=[29492], 90.00th=[33817], 95.00th=[36963], 00:09:24.131 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:09:24.131 | 99.99th=[44827] 00:09:24.131 bw ( KiB/s): min= 8769, max=11728, per=28.27%, avg=10248.50, stdev=2092.33, samples=2 00:09:24.131 iops : min= 2192, max= 2932, avg=2562.00, stdev=523.26, samples=2 00:09:24.131 lat (msec) : 10=0.15%, 20=18.10%, 50=81.48%, 100=0.27% 00:09:24.131 cpu : usr=2.49%, sys=6.67%, ctx=669, majf=0, minf=17 00:09:24.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:24.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.131 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.131 job1: (groupid=0, jobs=1): err= 0: pid=66505: Wed Nov 20 16:54:42 2024 00:09:24.131 read: IOPS=1918, BW=7674KiB/s (7858kB/s)(7712KiB/1005msec) 00:09:24.131 slat (usec): min=5, max=9739, avg=252.25, stdev=920.95 00:09:24.131 clat (usec): min=3197, max=45869, avg=31020.56, stdev=5762.33 00:09:24.131 lat (usec): min=7422, max=48882, avg=31272.81, stdev=5802.75 00:09:24.131 clat percentiles (usec): 00:09:24.131 | 1.00th=[ 7767], 5.00th=[22676], 10.00th=[25560], 20.00th=[27919], 00:09:24.131 | 30.00th=[29230], 40.00th=[30278], 50.00th=[30802], 60.00th=[32113], 00:09:24.131 | 70.00th=[33817], 80.00th=[35390], 90.00th=[38536], 95.00th=[40109], 00:09:24.131 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:09:24.131 | 99.99th=[45876] 00:09:24.131 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:24.131 slat (usec): min=12, max=9992, avg=242.96, stdev=820.54 00:09:24.131 clat (usec): min=15264, max=66668, avg=32655.89, stdev=10845.53 00:09:24.131 lat (usec): min=15286, max=66714, avg=32898.85, stdev=10922.63 00:09:24.131 clat percentiles (usec): 00:09:24.131 | 1.00th=[16909], 5.00th=[20579], 10.00th=[21627], 20.00th=[21890], 00:09:24.131 | 30.00th=[23987], 40.00th=[28443], 50.00th=[31327], 60.00th=[32637], 00:09:24.131 | 70.00th=[35914], 80.00th=[42206], 90.00th=[50594], 95.00th=[53740], 00:09:24.131 | 99.00th=[59507], 99.50th=[60556], 
99.90th=[61604], 99.95th=[63177], 00:09:24.131 | 99.99th=[66847] 00:09:24.131 bw ( KiB/s): min= 7264, max= 9120, per=22.60%, avg=8192.00, stdev=1312.39, samples=2 00:09:24.131 iops : min= 1816, max= 2280, avg=2048.00, stdev=328.10, samples=2 00:09:24.131 lat (msec) : 4=0.03%, 10=0.53%, 20=2.67%, 50=91.52%, 100=5.26% 00:09:24.132 cpu : usr=1.00%, sys=7.77%, ctx=616, majf=0, minf=12 00:09:24.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:24.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.132 issued rwts: total=1928,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.132 job2: (groupid=0, jobs=1): err= 0: pid=66506: Wed Nov 20 16:54:42 2024 00:09:24.132 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:09:24.132 slat (usec): min=6, max=11233, avg=221.74, stdev=845.05 00:09:24.132 clat (usec): min=18855, max=42762, avg=28314.82, stdev=4461.38 00:09:24.132 lat (usec): min=18869, max=42931, avg=28536.56, stdev=4484.29 00:09:24.132 clat percentiles (usec): 00:09:24.132 | 1.00th=[19006], 5.00th=[21365], 10.00th=[22938], 20.00th=[24511], 00:09:24.132 | 30.00th=[25560], 40.00th=[26870], 50.00th=[27919], 60.00th=[29492], 00:09:24.132 | 70.00th=[30278], 80.00th=[31589], 90.00th=[34341], 95.00th=[36963], 00:09:24.132 | 99.00th=[39584], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:09:24.132 | 99.99th=[42730] 00:09:24.132 write: IOPS=2451, BW=9805KiB/s (10.0MB/s)(9844KiB/1004msec); 0 zone resets 00:09:24.132 slat (usec): min=8, max=10214, avg=214.59, stdev=761.12 00:09:24.132 clat (usec): min=3032, max=47528, avg=27439.18, stdev=8310.49 00:09:24.132 lat (usec): min=3953, max=47546, avg=27653.77, stdev=8340.05 00:09:24.132 clat percentiles (usec): 00:09:24.132 | 1.00th=[ 5145], 5.00th=[15401], 10.00th=[17433], 20.00th=[20055], 00:09:24.132 | 30.00th=[22152], 40.00th=[23987], 50.00th=[26870], 60.00th=[30278], 00:09:24.132 | 70.00th=[32637], 80.00th=[34866], 90.00th=[37487], 95.00th=[41681], 00:09:24.132 | 99.00th=[43779], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:09:24.132 | 99.99th=[47449] 00:09:24.132 bw ( KiB/s): min= 8208, max=10472, per=25.77%, avg=9340.00, stdev=1600.89, samples=2 00:09:24.132 iops : min= 2052, max= 2618, avg=2335.00, stdev=400.22, samples=2 00:09:24.132 lat (msec) : 4=0.11%, 10=0.80%, 20=10.76%, 50=88.33% 00:09:24.132 cpu : usr=2.09%, sys=6.58%, ctx=703, majf=0, minf=5 00:09:24.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:24.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.132 issued rwts: total=2048,2461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.132 job3: (groupid=0, jobs=1): err= 0: pid=66507: Wed Nov 20 16:54:42 2024 00:09:24.132 read: IOPS=1698, BW=6793KiB/s (6956kB/s)(6820KiB/1004msec) 00:09:24.132 slat (usec): min=8, max=15543, avg=252.01, stdev=1003.88 00:09:24.132 clat (usec): min=1615, max=57468, avg=32631.44, stdev=7262.99 00:09:24.132 lat (usec): min=3946, max=58958, avg=32883.45, stdev=7289.79 00:09:24.132 clat percentiles (usec): 00:09:24.132 | 1.00th=[ 7898], 5.00th=[22414], 10.00th=[27132], 20.00th=[28967], 00:09:24.132 | 30.00th=[30016], 40.00th=[30278], 50.00th=[31589], 
60.00th=[33162], 00:09:24.132 | 70.00th=[34341], 80.00th=[38536], 90.00th=[41681], 95.00th=[42206], 00:09:24.132 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:09:24.132 | 99.99th=[57410] 00:09:24.132 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:09:24.132 slat (usec): min=8, max=12106, avg=271.32, stdev=859.53 00:09:24.132 clat (usec): min=19323, max=65871, avg=34378.79, stdev=9390.80 00:09:24.132 lat (usec): min=20003, max=65913, avg=34650.12, stdev=9445.32 00:09:24.132 clat percentiles (usec): 00:09:24.132 | 1.00th=[21103], 5.00th=[23725], 10.00th=[24773], 20.00th=[26608], 00:09:24.132 | 30.00th=[28443], 40.00th=[30016], 50.00th=[31589], 60.00th=[33817], 00:09:24.132 | 70.00th=[36439], 80.00th=[43254], 90.00th=[47973], 95.00th=[53740], 00:09:24.132 | 99.00th=[61604], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:09:24.132 | 99.99th=[65799] 00:09:24.132 bw ( KiB/s): min= 8192, max= 8192, per=22.60%, avg=8192.00, stdev= 0.00, samples=2 00:09:24.132 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:24.132 lat (msec) : 2=0.03%, 4=0.03%, 10=0.72%, 20=1.28%, 50=91.87% 00:09:24.132 lat (msec) : 100=6.08% 00:09:24.132 cpu : usr=2.09%, sys=5.98%, ctx=724, majf=0, minf=13 00:09:24.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:24.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.132 issued rwts: total=1705,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.132 00:09:24.132 Run status group 0 (all jobs): 00:09:24.132 READ: bw=30.8MiB/s (32.3MB/s), 6793KiB/s-8911KiB/s (6956kB/s-9124kB/s), io=30.9MiB (32.4MB), run=1004-1006msec 00:09:24.132 WRITE: bw=35.4MiB/s (37.1MB/s), 8151KiB/s-9.94MiB/s (8347kB/s-10.4MB/s), io=35.6MiB (37.3MB), run=1004-1006msec 00:09:24.132 00:09:24.132 Disk stats (read/write): 00:09:24.132 nvme0n1: ios=2098/2170, merge=0/0, ticks=19234/15136, in_queue=34370, util=88.57% 00:09:24.132 nvme0n2: ios=1584/1885, merge=0/0, ticks=16410/18196, in_queue=34606, util=89.08% 00:09:24.132 nvme0n3: ios=1863/2048, merge=0/0, ticks=17105/16824, in_queue=33929, util=88.99% 00:09:24.132 nvme0n4: ios=1536/1607, merge=0/0, ticks=16129/18195, in_queue=34324, util=89.32% 00:09:24.132 16:54:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:24.132 [global] 00:09:24.132 thread=1 00:09:24.132 invalidate=1 00:09:24.132 rw=randwrite 00:09:24.132 time_based=1 00:09:24.132 runtime=1 00:09:24.132 ioengine=libaio 00:09:24.132 direct=1 00:09:24.132 bs=4096 00:09:24.132 iodepth=128 00:09:24.132 norandommap=0 00:09:24.132 numjobs=1 00:09:24.132 00:09:24.132 verify_dump=1 00:09:24.132 verify_backlog=512 00:09:24.132 verify_state_save=0 00:09:24.132 do_verify=1 00:09:24.132 verify=crc32c-intel 00:09:24.132 [job0] 00:09:24.132 filename=/dev/nvme0n1 00:09:24.132 [job1] 00:09:24.132 filename=/dev/nvme0n2 00:09:24.132 [job2] 00:09:24.132 filename=/dev/nvme0n3 00:09:24.132 [job3] 00:09:24.132 filename=/dev/nvme0n4 00:09:24.132 Could not set queue depth (nvme0n1) 00:09:24.132 Could not set queue depth (nvme0n2) 00:09:24.132 Could not set queue depth (nvme0n3) 00:09:24.132 Could not set queue depth (nvme0n4) 00:09:24.132 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.132 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.133 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.133 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.133 fio-3.35 00:09:24.133 Starting 4 threads 00:09:25.509 00:09:25.509 job0: (groupid=0, jobs=1): err= 0: pid=66561: Wed Nov 20 16:54:43 2024 00:09:25.509 read: IOPS=5741, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec) 00:09:25.509 slat (usec): min=7, max=5220, avg=79.54, stdev=491.45 00:09:25.509 clat (usec): min=1453, max=18158, avg=11145.50, stdev=1308.05 00:09:25.509 lat (usec): min=6238, max=21662, avg=11225.05, stdev=1332.65 00:09:25.509 clat percentiles (usec): 00:09:25.509 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[10683], 00:09:25.509 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:25.509 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:09:25.509 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:09:25.509 | 99.99th=[18220] 00:09:25.509 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:25.509 slat (usec): min=10, max=6378, avg=81.39, stdev=467.97 00:09:25.509 clat (usec): min=5353, max=14093, avg=10232.86, stdev=915.29 00:09:25.509 lat (usec): min=7341, max=14363, avg=10314.25, stdev=813.61 00:09:25.509 clat percentiles (usec): 00:09:25.509 | 1.00th=[ 6915], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:09:25.509 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:25.509 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:09:25.509 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14091], 99.95th=[14091], 00:09:25.509 | 99.99th=[14091] 00:09:25.509 bw ( KiB/s): min=24568, max=24625, per=34.75%, avg=24596.50, stdev=40.31, samples=2 00:09:25.509 iops : min= 6142, max= 6156, avg=6149.00, stdev= 9.90, samples=2 00:09:25.509 lat (msec) : 2=0.01%, 10=18.71%, 20=81.28% 00:09:25.509 cpu : usr=5.29%, sys=14.27%, ctx=242, majf=0, minf=10 00:09:25.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:25.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.509 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.509 job1: (groupid=0, jobs=1): err= 0: pid=66562: Wed Nov 20 16:54:43 2024 00:09:25.509 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:25.509 slat (usec): min=4, max=5723, avg=81.05, stdev=486.19 00:09:25.509 clat (usec): min=6493, max=19075, avg=11400.73, stdev=1209.09 00:09:25.509 lat (usec): min=6508, max=22527, avg=11481.78, stdev=1218.54 00:09:25.509 clat percentiles (usec): 00:09:25.509 | 1.00th=[ 7308], 5.00th=[10159], 10.00th=[10683], 20.00th=[10945], 00:09:25.509 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:09:25.509 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12387], 00:09:25.509 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:09:25.509 | 99.99th=[19006] 00:09:25.509 write: IOPS=6116, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:09:25.509 slat (usec): min=10, 
max=6710, avg=81.67, stdev=473.62 00:09:25.509 clat (usec): min=604, max=15119, avg=10265.96, stdev=1130.11 00:09:25.509 lat (usec): min=4548, max=15194, avg=10347.63, stdev=1053.39 00:09:25.509 clat percentiles (usec): 00:09:25.509 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:09:25.510 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:25.510 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:09:25.510 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14484], 99.95th=[14615], 00:09:25.510 | 99.99th=[15139] 00:09:25.510 bw ( KiB/s): min=23528, max=24576, per=33.98%, avg=24052.00, stdev=741.05, samples=2 00:09:25.510 iops : min= 5882, max= 6144, avg=6013.00, stdev=185.26, samples=2 00:09:25.510 lat (usec) : 750=0.01% 00:09:25.510 lat (msec) : 10=19.84%, 20=80.15% 00:09:25.510 cpu : usr=4.79%, sys=15.25%, ctx=250, majf=0, minf=9 00:09:25.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:25.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.510 issued rwts: total=5632,6141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.510 job2: (groupid=0, jobs=1): err= 0: pid=66563: Wed Nov 20 16:54:43 2024 00:09:25.510 read: IOPS=2230, BW=8921KiB/s (9135kB/s)(8948KiB/1003msec) 00:09:25.510 slat (usec): min=7, max=14993, avg=205.78, stdev=1011.27 00:09:25.510 clat (usec): min=693, max=55437, avg=24868.37, stdev=6252.94 00:09:25.510 lat (usec): min=13220, max=55461, avg=25074.15, stdev=6302.37 00:09:25.510 clat percentiles (usec): 00:09:25.510 | 1.00th=[13698], 5.00th=[16712], 10.00th=[20317], 20.00th=[21627], 00:09:25.510 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[23725], 00:09:25.510 | 70.00th=[26608], 80.00th=[28705], 90.00th=[30802], 95.00th=[35390], 00:09:25.510 | 99.00th=[51643], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:09:25.510 | 99.99th=[55313] 00:09:25.510 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:25.510 slat (usec): min=11, max=7921, avg=203.00, stdev=860.71 00:09:25.510 clat (usec): min=9644, max=63241, avg=27698.28, stdev=14676.62 00:09:25.510 lat (usec): min=9668, max=63277, avg=27901.28, stdev=14774.68 00:09:25.510 clat percentiles (usec): 00:09:25.510 | 1.00th=[12387], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270], 00:09:25.510 | 30.00th=[15664], 40.00th=[17433], 50.00th=[18744], 60.00th=[24773], 00:09:25.510 | 70.00th=[38011], 80.00th=[42206], 90.00th=[50594], 95.00th=[55837], 00:09:25.510 | 99.00th=[61080], 99.50th=[61604], 99.90th=[63177], 99.95th=[63177], 00:09:25.510 | 99.99th=[63177] 00:09:25.510 bw ( KiB/s): min= 8192, max=12312, per=14.48%, avg=10252.00, stdev=2913.28, samples=2 00:09:25.510 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:09:25.510 lat (usec) : 750=0.02% 00:09:25.510 lat (msec) : 10=0.10%, 20=34.19%, 50=59.33%, 100=6.36% 00:09:25.510 cpu : usr=2.10%, sys=7.98%, ctx=249, majf=0, minf=7 00:09:25.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:25.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.510 issued rwts: total=2237,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.510 latency : target=0, window=0, percentile=100.00%, depth=128 
00:09:25.510 job3: (groupid=0, jobs=1): err= 0: pid=66564: Wed Nov 20 16:54:43 2024 00:09:25.510 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:09:25.510 slat (usec): min=5, max=15131, avg=207.00, stdev=1204.77 00:09:25.510 clat (usec): min=13581, max=61538, avg=25902.34, stdev=10190.67 00:09:25.510 lat (usec): min=16440, max=61555, avg=26109.34, stdev=10213.25 00:09:25.510 clat percentiles (usec): 00:09:25.510 | 1.00th=[15008], 5.00th=[17433], 10.00th=[18220], 20.00th=[18744], 00:09:25.510 | 30.00th=[19006], 40.00th=[19268], 50.00th=[20055], 60.00th=[26084], 00:09:25.510 | 70.00th=[29492], 80.00th=[30802], 90.00th=[39060], 95.00th=[49021], 00:09:25.510 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:09:25.510 | 99.99th=[61604] 00:09:25.510 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1002msec); 0 zone resets 00:09:25.510 slat (usec): min=11, max=8522, avg=153.02, stdev=788.70 00:09:25.510 clat (usec): min=1078, max=51559, avg=20208.37, stdev=7073.90 00:09:25.510 lat (usec): min=1101, max=51598, avg=20361.38, stdev=7057.11 00:09:25.510 clat percentiles (usec): 00:09:25.510 | 1.00th=[ 4555], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270], 00:09:25.510 | 30.00th=[15664], 40.00th=[16581], 50.00th=[18482], 60.00th=[19792], 00:09:25.510 | 70.00th=[20841], 80.00th=[23725], 90.00th=[30016], 95.00th=[35914], 00:09:25.510 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:09:25.510 | 99.99th=[51643] 00:09:25.510 bw ( KiB/s): min= 8208, max=14168, per=15.81%, avg=11188.00, stdev=4214.36, samples=2 00:09:25.510 iops : min= 2052, max= 3542, avg=2797.00, stdev=1053.59, samples=2 00:09:25.510 lat (msec) : 2=0.18%, 4=0.09%, 10=0.49%, 20=55.33%, 50=41.08% 00:09:25.510 lat (msec) : 100=2.83% 00:09:25.510 cpu : usr=2.00%, sys=8.79%, ctx=190, majf=0, minf=13 00:09:25.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:25.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.510 issued rwts: total=2560,2922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.510 00:09:25.510 Run status group 0 (all jobs): 00:09:25.510 READ: bw=63.0MiB/s (66.0MB/s), 8921KiB/s-22.4MiB/s (9135kB/s-23.5MB/s), io=63.2MiB (66.3MB), run=1002-1004msec 00:09:25.510 WRITE: bw=69.1MiB/s (72.5MB/s), 9.97MiB/s-23.9MiB/s (10.5MB/s-25.1MB/s), io=69.4MiB (72.8MB), run=1002-1004msec 00:09:25.510 00:09:25.510 Disk stats (read/write): 00:09:25.510 nvme0n1: ios=4990/5120, merge=0/0, ticks=51983/48764, in_queue=100747, util=87.58% 00:09:25.510 nvme0n2: ios=4845/5120, merge=0/0, ticks=51676/48530, in_queue=100206, util=88.26% 00:09:25.510 nvme0n3: ios=2048/2247, merge=0/0, ticks=24831/25633, in_queue=50464, util=89.20% 00:09:25.510 nvme0n4: ios=2048/2432, merge=0/0, ticks=14345/11121, in_queue=25466, util=89.56% 00:09:25.510 16:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:25.510 16:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66577 00:09:25.510 16:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:25.510 16:54:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:25.510 [global] 00:09:25.510 thread=1 00:09:25.510 invalidate=1 00:09:25.510 rw=read 
00:09:25.510 time_based=1 00:09:25.510 runtime=10 00:09:25.510 ioengine=libaio 00:09:25.510 direct=1 00:09:25.510 bs=4096 00:09:25.510 iodepth=1 00:09:25.510 norandommap=1 00:09:25.510 numjobs=1 00:09:25.510 00:09:25.510 [job0] 00:09:25.510 filename=/dev/nvme0n1 00:09:25.510 [job1] 00:09:25.510 filename=/dev/nvme0n2 00:09:25.510 [job2] 00:09:25.510 filename=/dev/nvme0n3 00:09:25.510 [job3] 00:09:25.510 filename=/dev/nvme0n4 00:09:25.510 Could not set queue depth (nvme0n1) 00:09:25.510 Could not set queue depth (nvme0n2) 00:09:25.510 Could not set queue depth (nvme0n3) 00:09:25.510 Could not set queue depth (nvme0n4) 00:09:25.510 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.510 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.510 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.510 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.510 fio-3.35 00:09:25.510 Starting 4 threads 00:09:28.795 16:54:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:28.795 fio: pid=66622, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:28.795 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=62492672, buflen=4096 00:09:28.795 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:28.795 fio: pid=66620, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:28.795 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69259264, buflen=4096 00:09:28.795 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.795 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:29.053 fio: pid=66617, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:29.053 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56184832, buflen=4096 00:09:29.053 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.053 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:29.371 fio: pid=66618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:29.371 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62787584, buflen=4096 00:09:29.371 00:09:29.371 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66617: Wed Nov 20 16:54:47 2024 00:09:29.371 read: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(53.6MiB/3500msec) 00:09:29.371 slat (usec): min=7, max=14816, avg=14.45, stdev=192.10 00:09:29.371 clat (usec): min=123, max=7441, avg=239.60, stdev=109.18 00:09:29.371 lat (usec): min=136, max=15061, avg=254.05, stdev=220.97 00:09:29.371 clat percentiles (usec): 00:09:29.371 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 172], 20.00th=[ 219], 00:09:29.371 | 30.00th=[ 225], 40.00th=[ 231], 
50.00th=[ 235], 60.00th=[ 241], 00:09:29.371 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 330], 00:09:29.371 | 99.00th=[ 371], 99.50th=[ 445], 99.90th=[ 1352], 99.95th=[ 2474], 00:09:29.371 | 99.99th=[ 4228] 00:09:29.371 bw ( KiB/s): min=11704, max=16216, per=23.37%, avg=15137.33, stdev=1748.67, samples=6 00:09:29.371 iops : min= 2926, max= 4054, avg=3784.33, stdev=437.17, samples=6 00:09:29.371 lat (usec) : 250=74.92%, 500=24.70%, 750=0.20%, 1000=0.04% 00:09:29.371 lat (msec) : 2=0.07%, 4=0.04%, 10=0.02% 00:09:29.371 cpu : usr=1.20%, sys=3.92%, ctx=13730, majf=0, minf=1 00:09:29.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 issued rwts: total=13718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.372 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66618: Wed Nov 20 16:54:47 2024 00:09:29.372 read: IOPS=4054, BW=15.8MiB/s (16.6MB/s)(59.9MiB/3781msec) 00:09:29.372 slat (usec): min=7, max=12503, avg=16.62, stdev=191.34 00:09:29.372 clat (usec): min=118, max=25384, avg=228.62, stdev=215.51 00:09:29.372 lat (usec): min=130, max=25398, avg=245.24, stdev=288.28 00:09:29.372 clat percentiles (usec): 00:09:29.372 | 1.00th=[ 128], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 186], 00:09:29.372 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 235], 00:09:29.372 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 326], 00:09:29.372 | 99.00th=[ 383], 99.50th=[ 519], 99.90th=[ 848], 99.95th=[ 1532], 00:09:29.372 | 99.99th=[ 3261] 00:09:29.372 bw ( KiB/s): min=12280, max=18133, per=24.38%, avg=15789.29, stdev=1739.44, samples=7 00:09:29.372 iops : min= 3070, max= 4533, avg=3947.29, stdev=434.80, samples=7 00:09:29.372 lat (usec) : 250=80.53%, 500=18.86%, 750=0.49%, 1000=0.05% 00:09:29.372 lat (msec) : 2=0.03%, 4=0.03%, 50=0.01% 00:09:29.372 cpu : usr=1.69%, sys=4.74%, ctx=15337, majf=0, minf=2 00:09:29.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 issued rwts: total=15330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.372 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66620: Wed Nov 20 16:54:47 2024 00:09:29.372 read: IOPS=5223, BW=20.4MiB/s (21.4MB/s)(66.1MiB/3237msec) 00:09:29.372 slat (usec): min=10, max=8830, avg=15.66, stdev=94.10 00:09:29.372 clat (usec): min=135, max=2051, avg=174.31, stdev=29.24 00:09:29.372 lat (usec): min=148, max=9026, avg=189.97, stdev=98.83 00:09:29.372 clat percentiles (usec): 00:09:29.372 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:29.372 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:09:29.372 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:09:29.372 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 478], 99.95th=[ 660], 00:09:29.372 | 99.99th=[ 1827] 00:09:29.372 bw ( KiB/s): min=20824, max=21288, per=32.48%, avg=21034.67, stdev=196.78, samples=6 00:09:29.372 iops : min= 5206, max= 5322, avg=5258.67, 
stdev=49.20, samples=6 00:09:29.372 lat (usec) : 250=99.73%, 500=0.17%, 750=0.06%, 1000=0.01% 00:09:29.372 lat (msec) : 2=0.01%, 4=0.01% 00:09:29.372 cpu : usr=1.39%, sys=7.23%, ctx=16913, majf=0, minf=1 00:09:29.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 issued rwts: total=16910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.372 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66622: Wed Nov 20 16:54:47 2024 00:09:29.372 read: IOPS=5118, BW=20.0MiB/s (21.0MB/s)(59.6MiB/2981msec) 00:09:29.372 slat (nsec): min=9586, max=67823, avg=12761.09, stdev=2869.88 00:09:29.372 clat (usec): min=140, max=7510, avg=181.28, stdev=90.94 00:09:29.372 lat (usec): min=152, max=7525, avg=194.04, stdev=91.02 00:09:29.372 clat percentiles (usec): 00:09:29.372 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:29.372 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:29.372 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 330], 00:09:29.372 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 515], 99.95th=[ 963], 00:09:29.372 | 99.99th=[ 3949] 00:09:29.372 bw ( KiB/s): min=13248, max=22368, per=31.34%, avg=20292.80, stdev=3944.56, samples=5 00:09:29.372 iops : min= 3312, max= 5592, avg=5073.20, stdev=986.14, samples=5 00:09:29.372 lat (usec) : 250=93.11%, 500=6.77%, 750=0.05%, 1000=0.02% 00:09:29.372 lat (msec) : 2=0.01%, 4=0.03%, 10=0.01% 00:09:29.372 cpu : usr=1.44%, sys=5.81%, ctx=15262, majf=0, minf=2 00:09:29.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.372 issued rwts: total=15258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.372 00:09:29.372 Run status group 0 (all jobs): 00:09:29.372 READ: bw=63.2MiB/s (66.3MB/s), 15.3MiB/s-20.4MiB/s (16.1MB/s-21.4MB/s), io=239MiB (251MB), run=2981-3781msec 00:09:29.372 00:09:29.372 Disk stats (read/write): 00:09:29.372 nvme0n1: ios=12921/0, merge=0/0, ticks=2981/0, in_queue=2981, util=94.99% 00:09:29.372 nvme0n2: ios=14328/0, merge=0/0, ticks=3314/0, in_queue=3314, util=95.50% 00:09:29.372 nvme0n3: ios=16275/0, merge=0/0, ticks=2889/0, in_queue=2889, util=96.40% 00:09:29.372 nvme0n4: ios=14675/0, merge=0/0, ticks=2677/0, in_queue=2677, util=96.53% 00:09:29.372 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.372 16:54:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:29.666 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.666 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:29.925 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.925 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:30.492 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.492 16:54:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:30.492 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.492 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66577 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:31.059 nvmf hotplug test: fio failed as expected 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:31.059 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:31.318 16:54:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.318 rmmod nvme_tcp 00:09:31.318 rmmod nvme_fabrics 00:09:31.318 rmmod nvme_keyring 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66197 ']' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66197 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66197 ']' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66197 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66197 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.318 killing process with pid 66197 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66197' 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66197 00:09:31.318 16:54:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66197 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
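The hotplug phase traced earlier in this run reduces to the sketch below. Paths, RPC names, bdev names and the fio-wrapper arguments are taken verbatim from the trace; the loop and status handling are an illustrative reconstruction, not the literal target/fio.sh code, and assume a built SPDK tree with the subsystem still connected.

    # Start read I/O against the exported namespaces, then hot-remove the
    # backing bdevs while fio is running; a failing fio is the expected result.
    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    $spdk/scripts/rpc.py bdev_raid_delete concat0
    $spdk/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $spdk/scripts/rpc.py bdev_malloc_delete "$m"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
    # Disconnect the initiator and wait until no block device with the test
    # serial is left before tearing the target down.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done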
00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:31.578 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:31.837 00:09:31.837 real 0m20.155s 00:09:31.837 user 1m16.719s 00:09:31.837 sys 0m9.831s 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.837 ************************************ 00:09:31.837 END TEST nvmf_fio_target 00:09:31.837 ************************************ 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.837 ************************************ 00:09:31.837 START TEST nvmf_bdevio 00:09:31.837 ************************************ 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:31.837 * Looking for test storage... 
00:09:31.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.837 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:32.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.097 --rc genhtml_branch_coverage=1 00:09:32.097 --rc genhtml_function_coverage=1 00:09:32.097 --rc genhtml_legend=1 00:09:32.097 --rc geninfo_all_blocks=1 00:09:32.097 --rc geninfo_unexecuted_blocks=1 00:09:32.097 00:09:32.097 ' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:32.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.097 --rc genhtml_branch_coverage=1 00:09:32.097 --rc genhtml_function_coverage=1 00:09:32.097 --rc genhtml_legend=1 00:09:32.097 --rc geninfo_all_blocks=1 00:09:32.097 --rc geninfo_unexecuted_blocks=1 00:09:32.097 00:09:32.097 ' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:32.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.097 --rc genhtml_branch_coverage=1 00:09:32.097 --rc genhtml_function_coverage=1 00:09:32.097 --rc genhtml_legend=1 00:09:32.097 --rc geninfo_all_blocks=1 00:09:32.097 --rc geninfo_unexecuted_blocks=1 00:09:32.097 00:09:32.097 ' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:32.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.097 --rc genhtml_branch_coverage=1 00:09:32.097 --rc genhtml_function_coverage=1 00:09:32.097 --rc genhtml_legend=1 00:09:32.097 --rc geninfo_all_blocks=1 00:09:32.097 --rc geninfo_unexecuted_blocks=1 00:09:32.097 00:09:32.097 ' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.097 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
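The nvmftestinit call above expands into the veth/namespace plumbing traced over the next few entries; condensed, it looks roughly like the sketch below. Interface names, addresses, the bridge and the port-4420 rule are copied from the trace, while the loops and ordering are only an approximation of the real nvmf_veth_init.

    # Two initiator veths stay on the host, two target veths move into the
    # nvmf_tgt_ns_spdk namespace, and everything is joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Open the NVMe/TCP port and tag the rules so teardown can strip them
    # later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.3    # quick reachability check before starting the target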
00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.097 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:32.098 Cannot find device "nvmf_init_br" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:32.098 Cannot find device "nvmf_init_br2" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:32.098 Cannot find device "nvmf_tgt_br" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.098 Cannot find device "nvmf_tgt_br2" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:32.098 Cannot find device "nvmf_init_br" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:32.098 Cannot find device "nvmf_init_br2" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:32.098 Cannot find device "nvmf_tgt_br" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:32.098 Cannot find device "nvmf_tgt_br2" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.098 Cannot find device "nvmf_br" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.098 Cannot find device "nvmf_init_if" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.098 Cannot find device "nvmf_init_if2" 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.098 
16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.098 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.357 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:32.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:32.616 00:09:32.616 --- 10.0.0.3 ping statistics --- 00:09:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.616 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:32.616 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:32.616 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:32.616 00:09:32.616 --- 10.0.0.4 ping statistics --- 00:09:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.616 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:32.616 00:09:32.616 --- 10.0.0.1 ping statistics --- 00:09:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.616 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:32.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:32.616 00:09:32.616 --- 10.0.0.2 ping statistics --- 00:09:32.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.616 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66943 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66943 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66943 ']' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.616 16:54:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.616 [2024-11-20 16:54:51.005605] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
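nvmfappstart above amounts to launching nvmf_tgt inside the test namespace with the 0x78 core mask and waiting for its RPC socket; a minimal sketch follows, with the polling loop standing in for the real waitforlisten helper and the default /var/tmp/spdk.sock path named in the log.

    # Launch the NVMe-oF target in the namespace created above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Block until the app has created its UNIX-domain RPC socket.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done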
00:09:32.616 [2024-11-20 16:54:51.005724] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.875 [2024-11-20 16:54:51.158047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.875 [2024-11-20 16:54:51.242323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.875 [2024-11-20 16:54:51.242399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.875 [2024-11-20 16:54:51.242411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.875 [2024-11-20 16:54:51.242421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.875 [2024-11-20 16:54:51.242428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.875 [2024-11-20 16:54:51.244579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:32.875 [2024-11-20 16:54:51.244707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.875 [2024-11-20 16:54:51.244816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:32.875 [2024-11-20 16:54:51.244840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.875 [2024-11-20 16:54:51.317897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.875 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.875 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:32.875 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.875 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.875 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 [2024-11-20 16:54:51.452220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 Malloc0 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.134 [2024-11-20 16:54:51.537016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.134 { 00:09:33.134 "params": { 00:09:33.134 "name": "Nvme$subsystem", 00:09:33.134 "trtype": "$TEST_TRANSPORT", 00:09:33.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.134 "adrfam": "ipv4", 00:09:33.134 "trsvcid": "$NVMF_PORT", 00:09:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.134 "hdgst": ${hdgst:-false}, 00:09:33.134 "ddgst": ${ddgst:-false} 00:09:33.134 }, 00:09:33.134 "method": "bdev_nvme_attach_controller" 00:09:33.134 } 00:09:33.134 EOF 00:09:33.134 )") 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
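Before bdevio runs, the rpc_cmd calls traced above have provisioned the target; stripped of the xtrace noise they amount to the sequence below (arguments copied from the log, run against the default RPC socket). The JSON that gen_nvmf_target_json prints next is then handed to bdevio via /dev/fd/62 so it attaches to this listener as controller Nvme1.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420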
00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:33.134 16:54:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.134 "params": { 00:09:33.134 "name": "Nvme1", 00:09:33.134 "trtype": "tcp", 00:09:33.134 "traddr": "10.0.0.3", 00:09:33.134 "adrfam": "ipv4", 00:09:33.134 "trsvcid": "4420", 00:09:33.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.134 "hdgst": false, 00:09:33.134 "ddgst": false 00:09:33.134 }, 00:09:33.134 "method": "bdev_nvme_attach_controller" 00:09:33.134 }' 00:09:33.134 [2024-11-20 16:54:51.597814] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:09:33.134 [2024-11-20 16:54:51.597930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66973 ] 00:09:33.392 [2024-11-20 16:54:51.744710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.392 [2024-11-20 16:54:51.826555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.392 [2024-11-20 16:54:51.826704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.392 [2024-11-20 16:54:51.826702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.392 [2024-11-20 16:54:51.913560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.651 I/O targets: 00:09:33.651 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:33.651 00:09:33.651 00:09:33.651 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.651 http://cunit.sourceforge.net/ 00:09:33.651 00:09:33.651 00:09:33.651 Suite: bdevio tests on: Nvme1n1 00:09:33.651 Test: blockdev write read block ...passed 00:09:33.651 Test: blockdev write zeroes read block ...passed 00:09:33.651 Test: blockdev write zeroes read no split ...passed 00:09:33.651 Test: blockdev write zeroes read split ...passed 00:09:33.651 Test: blockdev write zeroes read split partial ...passed 00:09:33.651 Test: blockdev reset ...[2024-11-20 16:54:52.095983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:33.651 [2024-11-20 16:54:52.096212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910180 (9): Bad file descriptor 00:09:33.651 [2024-11-20 16:54:52.115713] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:33.651 passed 00:09:33.651 Test: blockdev write read 8 blocks ...passed 00:09:33.651 Test: blockdev write read size > 128k ...passed 00:09:33.651 Test: blockdev write read invalid size ...passed 00:09:33.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.651 Test: blockdev write read max offset ...passed 00:09:33.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:33.651 Test: blockdev writev readv 8 blocks ...passed 00:09:33.651 Test: blockdev writev readv 30 x 1block ...passed 00:09:33.651 Test: blockdev writev readv block ...passed 00:09:33.651 Test: blockdev writev readv size > 128k ...passed 00:09:33.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:33.651 Test: blockdev comparev and writev ...[2024-11-20 16:54:52.125604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.651 [2024-11-20 16:54:52.125795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:33.651 [2024-11-20 16:54:52.125983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.651 [2024-11-20 16:54:52.126106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:33.651 [2024-11-20 16:54:52.126620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.651 [2024-11-20 16:54:52.126778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:33.651 [2024-11-20 16:54:52.126935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.651 [2024-11-20 16:54:52.127053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:33.651 [2024-11-20 16:54:52.127560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.651 [2024-11-20 16:54:52.127697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.127844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.652 [2024-11-20 16:54:52.127976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.128532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.652 [2024-11-20 16:54:52.128669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.128779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.652 [2024-11-20 16:54:52.128932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:33.652 passed 00:09:33.652 Test: blockdev nvme passthru rw ...passed 00:09:33.652 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:54:52.130010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.652 [2024-11-20 16:54:52.130151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.130370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.652 [2024-11-20 16:54:52.130518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.130773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.652 [2024-11-20 16:54:52.130905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:33.652 [2024-11-20 16:54:52.131172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.652 [2024-11-20 16:54:52.131306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:33.652 passed 00:09:33.652 Test: blockdev nvme admin passthru ...passed 00:09:33.652 Test: blockdev copy ...passed 00:09:33.652 00:09:33.652 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.652 suites 1 1 n/a 0 0 00:09:33.652 tests 23 23 23 0 0 00:09:33.652 asserts 152 152 152 0 n/a 00:09:33.652 00:09:33.652 Elapsed time = 0.177 seconds 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.910 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.168 rmmod nvme_tcp 00:09:34.168 rmmod nvme_fabrics 00:09:34.168 rmmod nvme_keyring 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66943 ']' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66943 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66943 ']' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66943 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66943 00:09:34.168 killing process with pid 66943 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66943' 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66943 00:09:34.168 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66943 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:34.427 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:34.686 16:54:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:34.686 00:09:34.686 real 0m2.836s 00:09:34.686 user 0m7.923s 00:09:34.686 sys 0m0.989s 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.686 ************************************ 00:09:34.686 END TEST nvmf_bdevio 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:34.686 ************************************ 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:34.686 00:09:34.686 real 2m36.054s 00:09:34.686 user 6m47.912s 00:09:34.686 sys 0m51.803s 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.686 ************************************ 00:09:34.686 END TEST nvmf_target_core 00:09:34.686 ************************************ 00:09:34.686 16:54:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:34.686 16:54:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.686 16:54:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.686 16:54:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.686 ************************************ 00:09:34.686 START TEST nvmf_target_extra 00:09:34.686 ************************************ 00:09:34.686 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:34.946 * Looking for test storage... 
00:09:34.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.946 --rc genhtml_branch_coverage=1 00:09:34.946 --rc genhtml_function_coverage=1 00:09:34.946 --rc genhtml_legend=1 00:09:34.946 --rc geninfo_all_blocks=1 00:09:34.946 --rc geninfo_unexecuted_blocks=1 00:09:34.946 00:09:34.946 ' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.946 --rc genhtml_branch_coverage=1 00:09:34.946 --rc genhtml_function_coverage=1 00:09:34.946 --rc genhtml_legend=1 00:09:34.946 --rc geninfo_all_blocks=1 00:09:34.946 --rc geninfo_unexecuted_blocks=1 00:09:34.946 00:09:34.946 ' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.946 --rc genhtml_branch_coverage=1 00:09:34.946 --rc genhtml_function_coverage=1 00:09:34.946 --rc genhtml_legend=1 00:09:34.946 --rc geninfo_all_blocks=1 00:09:34.946 --rc geninfo_unexecuted_blocks=1 00:09:34.946 00:09:34.946 ' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.946 --rc genhtml_branch_coverage=1 00:09:34.946 --rc genhtml_function_coverage=1 00:09:34.946 --rc genhtml_legend=1 00:09:34.946 --rc geninfo_all_blocks=1 00:09:34.946 --rc geninfo_unexecuted_blocks=1 00:09:34.946 00:09:34.946 ' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.946 16:54:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:34.946 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:34.947 ************************************ 00:09:34.947 START TEST nvmf_auth_target 00:09:34.947 ************************************ 00:09:34.947 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:35.211 * Looking for test storage... 
00:09:35.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.211 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.212 --rc genhtml_branch_coverage=1 00:09:35.212 --rc genhtml_function_coverage=1 00:09:35.212 --rc genhtml_legend=1 00:09:35.212 --rc geninfo_all_blocks=1 00:09:35.212 --rc geninfo_unexecuted_blocks=1 00:09:35.212 00:09:35.212 ' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.212 --rc genhtml_branch_coverage=1 00:09:35.212 --rc genhtml_function_coverage=1 00:09:35.212 --rc genhtml_legend=1 00:09:35.212 --rc geninfo_all_blocks=1 00:09:35.212 --rc geninfo_unexecuted_blocks=1 00:09:35.212 00:09:35.212 ' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.212 --rc genhtml_branch_coverage=1 00:09:35.212 --rc genhtml_function_coverage=1 00:09:35.212 --rc genhtml_legend=1 00:09:35.212 --rc geninfo_all_blocks=1 00:09:35.212 --rc geninfo_unexecuted_blocks=1 00:09:35.212 00:09:35.212 ' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.212 --rc genhtml_branch_coverage=1 00:09:35.212 --rc genhtml_function_coverage=1 00:09:35.212 --rc genhtml_legend=1 00:09:35.212 --rc geninfo_all_blocks=1 00:09:35.212 --rc geninfo_unexecuted_blocks=1 00:09:35.212 00:09:35.212 ' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.212 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.213 
16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.213 Cannot find device "nvmf_init_br" 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.213 Cannot find device "nvmf_init_br2" 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.213 Cannot find device "nvmf_tgt_br" 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.213 Cannot find device "nvmf_tgt_br2" 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.213 Cannot find device "nvmf_init_br" 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:35.213 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.499 Cannot find device "nvmf_init_br2" 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.499 Cannot find device "nvmf_tgt_br" 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.499 Cannot find device "nvmf_tgt_br2" 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.499 Cannot find device "nvmf_br" 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.499 Cannot find device "nvmf_init_if" 00:09:35.499 16:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.499 Cannot find device "nvmf_init_if2" 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.499 16:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.499 16:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.499 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.499 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:09:35.758 00:09:35.758 --- 10.0.0.3 ping statistics --- 00:09:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.758 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.758 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.758 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:09:35.758 00:09:35.758 --- 10.0.0.4 ping statistics --- 00:09:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.758 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:35.758 00:09:35.758 --- 10.0.0.1 ping statistics --- 00:09:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.758 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:35.758 00:09:35.758 --- 10.0.0.2 ping statistics --- 00:09:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.758 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.758 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67259 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67259 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67259 ']' 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.759 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.018 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.018 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:36.018 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.018 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.018 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67289 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0ff532e8ed8c4369700d3420f8e8434bb1c8d45b50fbac8a 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qRs 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0ff532e8ed8c4369700d3420f8e8434bb1c8d45b50fbac8a 0 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0ff532e8ed8c4369700d3420f8e8434bb1c8d45b50fbac8a 0 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0ff532e8ed8c4369700d3420f8e8434bb1c8d45b50fbac8a 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.278 16:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qRs 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qRs 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qRs 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=caab6d20e1ae6d667b159521eff18a8e7733e84bfa0f16190641a9a4bbf8e755 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iTT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key caab6d20e1ae6d667b159521eff18a8e7733e84bfa0f16190641a9a4bbf8e755 3 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 caab6d20e1ae6d667b159521eff18a8e7733e84bfa0f16190641a9a4bbf8e755 3 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=caab6d20e1ae6d667b159521eff18a8e7733e84bfa0f16190641a9a4bbf8e755 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iTT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iTT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.iTT 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:36.278 16:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb08191c852d15edaa4af71644ffbeca 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Cgx 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb08191c852d15edaa4af71644ffbeca 1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb08191c852d15edaa4af71644ffbeca 1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb08191c852d15edaa4af71644ffbeca 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Cgx 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Cgx 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Cgx 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.278 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f890ed0d80742a329abdac3e71c70decbaba7edf6232c17e 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.57Z 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f890ed0d80742a329abdac3e71c70decbaba7edf6232c17e 2 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f890ed0d80742a329abdac3e71c70decbaba7edf6232c17e 2 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f890ed0d80742a329abdac3e71c70decbaba7edf6232c17e 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:36.279 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.57Z 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.57Z 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.57Z 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=180a1fc3e8bf54a9e7413e630d34dde950891ed76b53c6a7 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fc6 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 180a1fc3e8bf54a9e7413e630d34dde950891ed76b53c6a7 2 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 180a1fc3e8bf54a9e7413e630d34dde950891ed76b53c6a7 2 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=180a1fc3e8bf54a9e7413e630d34dde950891ed76b53c6a7 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fc6 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fc6 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fc6 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.538 16:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=68d3471dcd0ba37033dff3110c69b062 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VGi 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 68d3471dcd0ba37033dff3110c69b062 1 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 68d3471dcd0ba37033dff3110c69b062 1 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=68d3471dcd0ba37033dff3110c69b062 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VGi 00:09:36.538 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VGi 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.VGi 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51f8340c7f0eafd548d34e97b66793912e9f9129a8963a3e0cfcde4e7996335f 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GBG 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
51f8340c7f0eafd548d34e97b66793912e9f9129a8963a3e0cfcde4e7996335f 3 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51f8340c7f0eafd548d34e97b66793912e9f9129a8963a3e0cfcde4e7996335f 3 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51f8340c7f0eafd548d34e97b66793912e9f9129a8963a3e0cfcde4e7996335f 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:36.539 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GBG 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GBG 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.GBG 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67259 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67259 ']' 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.539 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67289 /var/tmp/host.sock 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67289 ']' 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
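gen_dhchap_key, traced several times above (keys 0-3 and their controller keys), draws len/2 random bytes with xxd and wraps the resulting hex string into the DH-HMAC-CHAP secret representation DHHC-1:<hash id>:<base64>:, where the hash id follows the digests table in the trace (0 = null, 1 = sha256, 2 = sha384, 3 = sha512). A minimal stand-alone sketch of that step; the only detail not visible in the trace is the byte order of the appended CRC-32, assumed little-endian here:

# Sketch of the key generation/formatting seen above; not the verbatim nvmf/common.sh code.
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # hex secret of the requested length
    python3 - "$key" "$digest_id" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
# Secret representation: base64 of the ASCII hex secret followed by its CRC-32
# (little-endian byte order assumed; the trace only shows the final string).
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}

# e.g. gen_dhchap_key_sketch 0 48 prints a secret shaped like the key0 value above:
#   DHHC-1:00:<72 base64 characters>: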
00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.107 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qRs 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qRs 00:09:37.366 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qRs 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.iTT ]] 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iTT 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iTT 00:09:37.625 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iTT 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Cgx 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Cgx 00:09:37.883 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Cgx 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.57Z ]] 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57Z 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57Z 00:09:38.142 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57Z 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fc6 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fc6 00:09:38.400 16:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fc6 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.VGi ]] 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VGi 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VGi 00:09:38.659 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VGi 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GBG 00:09:39.225 16:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GBG 00:09:39.225 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GBG 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:39.484 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:39.744 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:40.003 00:09:40.003 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.003 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.003 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:40.570 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:40.571 { 00:09:40.571 "cntlid": 1, 00:09:40.571 "qid": 0, 00:09:40.571 "state": "enabled", 00:09:40.571 "thread": "nvmf_tgt_poll_group_000", 00:09:40.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:40.571 "listen_address": { 00:09:40.571 "trtype": "TCP", 00:09:40.571 "adrfam": "IPv4", 00:09:40.571 "traddr": "10.0.0.3", 00:09:40.571 "trsvcid": "4420" 00:09:40.571 }, 00:09:40.571 "peer_address": { 00:09:40.571 "trtype": "TCP", 00:09:40.571 "adrfam": "IPv4", 00:09:40.571 "traddr": "10.0.0.1", 00:09:40.571 "trsvcid": "55084" 00:09:40.571 }, 00:09:40.571 "auth": { 00:09:40.571 "state": "completed", 00:09:40.571 "digest": "sha256", 00:09:40.571 "dhgroup": "null" 00:09:40.571 } 00:09:40.571 } 00:09:40.571 ]' 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.571 16:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.829 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:09:40.829 16:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.124 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:46.124 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.125 16:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.125 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.125 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.383 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.383 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.383 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.383 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.640 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.640 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.640 { 00:09:46.640 "cntlid": 3, 00:09:46.640 "qid": 0, 00:09:46.640 "state": "enabled", 00:09:46.640 "thread": "nvmf_tgt_poll_group_000", 00:09:46.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:46.640 "listen_address": { 00:09:46.640 "trtype": "TCP", 00:09:46.640 "adrfam": "IPv4", 00:09:46.640 "traddr": "10.0.0.3", 00:09:46.640 "trsvcid": "4420" 00:09:46.640 }, 00:09:46.640 "peer_address": { 00:09:46.640 "trtype": "TCP", 00:09:46.640 "adrfam": "IPv4", 00:09:46.640 "traddr": "10.0.0.1", 00:09:46.640 "trsvcid": "55112" 00:09:46.640 }, 00:09:46.640 "auth": { 00:09:46.640 "state": "completed", 00:09:46.640 "digest": "sha256", 00:09:46.640 "dhgroup": "null" 00:09:46.640 } 00:09:46.640 } 00:09:46.640 ]' 00:09:46.640 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.640 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.640 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.640 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:46.640 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.640 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.640 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.640 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.897 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret 
DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:09:46.897 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:47.509 16:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.091 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.349 00:09:48.349 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.349 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:48.349 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.607 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.607 { 00:09:48.607 "cntlid": 5, 00:09:48.607 "qid": 0, 00:09:48.608 "state": "enabled", 00:09:48.608 "thread": "nvmf_tgt_poll_group_000", 00:09:48.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:48.608 "listen_address": { 00:09:48.608 "trtype": "TCP", 00:09:48.608 "adrfam": "IPv4", 00:09:48.608 "traddr": "10.0.0.3", 00:09:48.608 "trsvcid": "4420" 00:09:48.608 }, 00:09:48.608 "peer_address": { 00:09:48.608 "trtype": "TCP", 00:09:48.608 "adrfam": "IPv4", 00:09:48.608 "traddr": "10.0.0.1", 00:09:48.608 "trsvcid": "55144" 00:09:48.608 }, 00:09:48.608 "auth": { 00:09:48.608 "state": "completed", 00:09:48.608 "digest": "sha256", 00:09:48.608 "dhgroup": "null" 00:09:48.608 } 00:09:48.608 } 00:09:48.608 ]' 00:09:48.608 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.608 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.865 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:09:48.865 16:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:49.799 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:50.366 00:09:50.366 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.366 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.366 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.625 { 00:09:50.625 "cntlid": 7, 00:09:50.625 "qid": 0, 00:09:50.625 "state": "enabled", 00:09:50.625 "thread": "nvmf_tgt_poll_group_000", 00:09:50.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:50.625 "listen_address": { 00:09:50.625 "trtype": "TCP", 00:09:50.625 "adrfam": "IPv4", 00:09:50.625 "traddr": "10.0.0.3", 00:09:50.625 "trsvcid": "4420" 00:09:50.625 }, 00:09:50.625 "peer_address": { 00:09:50.625 "trtype": "TCP", 00:09:50.625 "adrfam": "IPv4", 00:09:50.625 "traddr": "10.0.0.1", 00:09:50.625 "trsvcid": "37926" 00:09:50.625 }, 00:09:50.625 "auth": { 00:09:50.625 "state": "completed", 00:09:50.625 "digest": "sha256", 00:09:50.625 "dhgroup": "null" 00:09:50.625 } 00:09:50.625 } 00:09:50.625 ]' 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.625 16:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.625 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:50.625 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.625 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.625 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.625 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.896 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:09:50.896 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:51.465 16:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.032 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.290 00:09:52.290 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.290 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.290 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.549 { 00:09:52.549 "cntlid": 9, 00:09:52.549 "qid": 0, 00:09:52.549 "state": "enabled", 00:09:52.549 "thread": "nvmf_tgt_poll_group_000", 00:09:52.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:52.549 "listen_address": { 00:09:52.549 "trtype": "TCP", 00:09:52.549 "adrfam": "IPv4", 00:09:52.549 "traddr": "10.0.0.3", 00:09:52.549 "trsvcid": "4420" 00:09:52.549 }, 00:09:52.549 "peer_address": { 00:09:52.549 "trtype": "TCP", 00:09:52.549 "adrfam": "IPv4", 00:09:52.549 "traddr": "10.0.0.1", 00:09:52.549 "trsvcid": "37952" 00:09:52.549 }, 00:09:52.549 "auth": { 00:09:52.549 "state": "completed", 00:09:52.549 "digest": "sha256", 00:09:52.549 "dhgroup": "ffdhe2048" 00:09:52.549 } 00:09:52.549 } 00:09:52.549 ]' 00:09:52.549 16:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.549 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.549 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.549 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:52.549 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.807 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.807 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.807 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.065 
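The entries above trace one pass of the sha256/ffdhe2048 loop with key0: the host-side bdev_nvme options are restricted to that digest/dhgroup, the host NQN is added to cnode0 with the key0/ckey0 pair, a host-side controller is attached with the same keys, the controller name and the qpair's negotiated auth fields are checked, and the controller is detached again (the hostrpc wrapper seen in the trace expands to rpc.py -s /var/tmp/host.sock). A condensed sketch of that round trip, using only commands and values shown in the trace rather than the literal auth.sh code; key0/ckey0 are key names the test registered earlier, outside this excerpt:

  # host-side RPC endpoint used throughout the trace
  HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # restrict the host to a single digest/dhgroup combination for this pass
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # target side (rpc_cmd in the trace): allow the host with the key0/ckey0 pair
  # rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a host-side controller authenticating with the same keys, verify it, detach it
  $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $HOSTRPC bdev_nvme_detach_controller nvme0

  # the same key pair is then exercised through the kernel initiator (nvme_connect in the trace);
  # KEY0_SECRET/CKEY0_SECRET stand for the DHHC-1:... strings printed in the log
  # nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
  #     --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 \
  #     --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
  # nvme disconnect -n "$SUBNQN"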
16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:09:53.066 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:09:53.633 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.633 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:53.633 16:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.633 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.633 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.633 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.633 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:53.633 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.892 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.162 00:09:54.162 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.162 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.162 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.442 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.442 { 00:09:54.442 "cntlid": 11, 00:09:54.442 "qid": 0, 00:09:54.442 "state": "enabled", 00:09:54.442 "thread": "nvmf_tgt_poll_group_000", 00:09:54.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:54.442 "listen_address": { 00:09:54.442 "trtype": "TCP", 00:09:54.442 "adrfam": "IPv4", 00:09:54.443 "traddr": "10.0.0.3", 00:09:54.443 "trsvcid": "4420" 00:09:54.443 }, 00:09:54.443 "peer_address": { 00:09:54.443 "trtype": "TCP", 00:09:54.443 "adrfam": "IPv4", 00:09:54.443 "traddr": "10.0.0.1", 00:09:54.443 "trsvcid": "37990" 00:09:54.443 }, 00:09:54.443 "auth": { 00:09:54.443 "state": "completed", 00:09:54.443 "digest": "sha256", 00:09:54.443 "dhgroup": "ffdhe2048" 00:09:54.443 } 00:09:54.443 } 00:09:54.443 ]' 00:09:54.443 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.705 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.705 16:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.705 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:54.705 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.705 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.705 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.705 
16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.963 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:09:54.963 16:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:09:55.529 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.788 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.047 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.306 00:09:56.306 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.306 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.306 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.565 { 00:09:56.565 "cntlid": 13, 00:09:56.565 "qid": 0, 00:09:56.565 "state": "enabled", 00:09:56.565 "thread": "nvmf_tgt_poll_group_000", 00:09:56.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:56.565 "listen_address": { 00:09:56.565 "trtype": "TCP", 00:09:56.565 "adrfam": "IPv4", 00:09:56.565 "traddr": "10.0.0.3", 00:09:56.565 "trsvcid": "4420" 00:09:56.565 }, 00:09:56.565 "peer_address": { 00:09:56.565 "trtype": "TCP", 00:09:56.565 "adrfam": "IPv4", 00:09:56.565 "traddr": "10.0.0.1", 00:09:56.565 "trsvcid": "38022" 00:09:56.565 }, 00:09:56.565 "auth": { 00:09:56.565 "state": "completed", 00:09:56.565 "digest": "sha256", 00:09:56.565 "dhgroup": "ffdhe2048" 00:09:56.565 } 00:09:56.565 } 00:09:56.565 ]' 00:09:56.565 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.565 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.565 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.565 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:56.565 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.824 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.824 16:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.824 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.082 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:09:57.082 16:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.649 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:57.907 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
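In the key3 pass entered just above, the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) (where $3 is the key index passed to connect_authenticate) comes out empty, so nvmf_subsystem_add_host is called with --dhchap-key key3 only and no --dhchap-ctrlr-key, and the bdev_connect that follows does the same: key3 exercises host-only authentication, without the controller (bidirectional) key used for keys 0-2. A minimal sketch of that conditional, reusing HOSTRPC/HOSTNQN/SUBNQN from the earlier sketch; the contents of the ckeys array are not visible in this excerpt, only that its key3 entry is empty:

  keyid=3
  # ckeys[] maps key index -> controller key name; the test leaves entry 3 empty
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # expands to an empty array here

  # with the empty expansion, only the host key is passed on both sides
  # rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
  $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"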
00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.908 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:58.166 00:09:58.424 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:58.424 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:58.424 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.682 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.682 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:58.683 { 00:09:58.683 "cntlid": 15, 00:09:58.683 "qid": 0, 00:09:58.683 "state": "enabled", 00:09:58.683 "thread": "nvmf_tgt_poll_group_000", 00:09:58.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:09:58.683 "listen_address": { 00:09:58.683 "trtype": "TCP", 00:09:58.683 "adrfam": "IPv4", 00:09:58.683 "traddr": "10.0.0.3", 00:09:58.683 "trsvcid": "4420" 00:09:58.683 }, 00:09:58.683 "peer_address": { 00:09:58.683 "trtype": "TCP", 00:09:58.683 "adrfam": "IPv4", 00:09:58.683 "traddr": "10.0.0.1", 00:09:58.683 "trsvcid": "38036" 00:09:58.683 }, 00:09:58.683 "auth": { 00:09:58.683 "state": "completed", 00:09:58.683 "digest": "sha256", 00:09:58.683 "dhgroup": "ffdhe2048" 00:09:58.683 } 00:09:58.683 } 00:09:58.683 ]' 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.683 
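Each pass ends with the qpair check shown above: the test pulls nvmf_subsystem_get_qpairs for cnode0 and compares .[0].auth.digest, .[0].auth.dhgroup and .[0].auth.state against the expected sha256 / ffdhe2048 / completed (here for the key3 controller, cntlid 15). The same three checks in a self-contained form, with the qpair record trimmed to the fields the test actually reads:

  # trimmed copy of the nvmf_subsystem_get_qpairs output printed above (cntlid 15)
  qpairs='[{"cntlid": 15, "qid": 0, "state": "enabled",
            "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}}]'

  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256    ]]   # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe2048 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]   # DH-HMAC-CHAP exchange finished

  # an auth.state other than "completed" makes the last test return non-zero and the pass fail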
16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.683 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.250 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:09:59.250 16:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:59.817 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:00.075 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.076 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.334 00:10:00.334 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:00.334 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.334 16:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:00.628 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.628 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.628 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.628 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.885 { 00:10:00.885 "cntlid": 17, 00:10:00.885 "qid": 0, 00:10:00.885 "state": "enabled", 00:10:00.885 "thread": "nvmf_tgt_poll_group_000", 00:10:00.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:00.885 "listen_address": { 00:10:00.885 "trtype": "TCP", 00:10:00.885 "adrfam": "IPv4", 00:10:00.885 "traddr": "10.0.0.3", 00:10:00.885 "trsvcid": "4420" 00:10:00.885 }, 00:10:00.885 "peer_address": { 00:10:00.885 "trtype": "TCP", 00:10:00.885 "adrfam": "IPv4", 00:10:00.885 "traddr": "10.0.0.1", 00:10:00.885 "trsvcid": "57978" 00:10:00.885 }, 00:10:00.885 "auth": { 00:10:00.885 "state": "completed", 00:10:00.885 "digest": "sha256", 00:10:00.885 "dhgroup": "ffdhe3072" 00:10:00.885 } 00:10:00.885 } 00:10:00.885 ]' 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.885 16:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.885 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.143 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:01.143 16:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.078 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.644 00:10:02.644 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:02.644 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.644 16:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.903 { 00:10:02.903 "cntlid": 19, 00:10:02.903 "qid": 0, 00:10:02.903 "state": "enabled", 00:10:02.903 "thread": "nvmf_tgt_poll_group_000", 00:10:02.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:02.903 "listen_address": { 00:10:02.903 "trtype": "TCP", 00:10:02.903 "adrfam": "IPv4", 00:10:02.903 "traddr": "10.0.0.3", 00:10:02.903 "trsvcid": "4420" 00:10:02.903 }, 00:10:02.903 "peer_address": { 00:10:02.903 "trtype": "TCP", 00:10:02.903 "adrfam": "IPv4", 00:10:02.903 "traddr": "10.0.0.1", 00:10:02.903 "trsvcid": "58016" 00:10:02.903 }, 00:10:02.903 "auth": { 00:10:02.903 "state": "completed", 00:10:02.903 "digest": "sha256", 00:10:02.903 "dhgroup": "ffdhe3072" 00:10:02.903 } 00:10:02.903 } 00:10:02.903 ]' 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:02.903 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.162 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.162 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.162 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.420 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:03.420 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:03.988 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.246 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.813 00:10:04.813 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.813 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.813 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.071 { 00:10:05.071 "cntlid": 21, 00:10:05.071 "qid": 0, 00:10:05.071 "state": "enabled", 00:10:05.071 "thread": "nvmf_tgt_poll_group_000", 00:10:05.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:05.071 "listen_address": { 00:10:05.071 "trtype": "TCP", 00:10:05.071 "adrfam": "IPv4", 00:10:05.071 "traddr": "10.0.0.3", 00:10:05.071 "trsvcid": "4420" 00:10:05.071 }, 00:10:05.071 "peer_address": { 00:10:05.071 "trtype": "TCP", 00:10:05.071 "adrfam": "IPv4", 00:10:05.071 "traddr": "10.0.0.1", 00:10:05.071 "trsvcid": "58052" 00:10:05.071 }, 00:10:05.071 "auth": { 00:10:05.071 "state": "completed", 00:10:05.071 "digest": "sha256", 00:10:05.071 "dhgroup": "ffdhe3072" 00:10:05.071 } 00:10:05.071 } 00:10:05.071 ]' 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.071 16:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.071 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.637 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:05.637 16:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.204 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.461 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.720 00:10:06.978 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.978 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.978 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.236 { 00:10:07.236 "cntlid": 23, 00:10:07.236 "qid": 0, 00:10:07.236 "state": "enabled", 00:10:07.236 "thread": "nvmf_tgt_poll_group_000", 00:10:07.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:07.236 "listen_address": { 00:10:07.236 "trtype": "TCP", 00:10:07.236 "adrfam": "IPv4", 00:10:07.236 "traddr": "10.0.0.3", 00:10:07.236 "trsvcid": "4420" 00:10:07.236 }, 00:10:07.236 "peer_address": { 00:10:07.236 "trtype": "TCP", 00:10:07.236 "adrfam": "IPv4", 00:10:07.236 "traddr": "10.0.0.1", 00:10:07.236 "trsvcid": "58074" 00:10:07.236 }, 00:10:07.236 "auth": { 00:10:07.236 "state": "completed", 00:10:07.236 "digest": "sha256", 00:10:07.236 "dhgroup": "ffdhe3072" 00:10:07.236 } 00:10:07.236 } 00:10:07.236 ]' 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.236 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.804 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:07.804 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:08.371 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.631 16:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.890 00:10:09.149 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.149 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.149 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.408 { 00:10:09.408 "cntlid": 25, 00:10:09.408 "qid": 0, 00:10:09.408 "state": "enabled", 00:10:09.408 "thread": "nvmf_tgt_poll_group_000", 00:10:09.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:09.408 "listen_address": { 00:10:09.408 "trtype": "TCP", 00:10:09.408 "adrfam": "IPv4", 00:10:09.408 "traddr": "10.0.0.3", 00:10:09.408 "trsvcid": "4420" 00:10:09.408 }, 00:10:09.408 "peer_address": { 00:10:09.408 "trtype": "TCP", 00:10:09.408 "adrfam": "IPv4", 00:10:09.408 "traddr": "10.0.0.1", 00:10:09.408 "trsvcid": "58102" 00:10:09.408 }, 00:10:09.408 "auth": { 00:10:09.408 "state": "completed", 00:10:09.408 "digest": "sha256", 00:10:09.408 "dhgroup": "ffdhe4096" 00:10:09.408 } 00:10:09.408 } 00:10:09.408 ]' 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.408 16:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.976 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:09.976 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:10.544 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.803 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.371 00:10:11.371 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.371 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.371 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.629 { 00:10:11.629 "cntlid": 27, 00:10:11.629 "qid": 0, 00:10:11.629 "state": "enabled", 00:10:11.629 "thread": "nvmf_tgt_poll_group_000", 00:10:11.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:11.629 "listen_address": { 00:10:11.629 "trtype": "TCP", 00:10:11.629 "adrfam": "IPv4", 00:10:11.629 "traddr": "10.0.0.3", 00:10:11.629 "trsvcid": "4420" 00:10:11.629 }, 00:10:11.629 "peer_address": { 00:10:11.629 "trtype": "TCP", 00:10:11.629 "adrfam": "IPv4", 00:10:11.629 "traddr": "10.0.0.1", 00:10:11.629 "trsvcid": "54174" 00:10:11.629 }, 00:10:11.629 "auth": { 00:10:11.629 "state": "completed", 
00:10:11.629 "digest": "sha256", 00:10:11.629 "dhgroup": "ffdhe4096" 00:10:11.629 } 00:10:11.629 } 00:10:11.629 ]' 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.629 16:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.629 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:11.629 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.629 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.629 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.630 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.889 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:11.889 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:12.456 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.456 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:12.456 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.456 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.456 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.457 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.457 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:12.457 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.023 16:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.023 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.282 00:10:13.282 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.282 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.282 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.543 { 00:10:13.543 "cntlid": 29, 00:10:13.543 "qid": 0, 00:10:13.543 "state": "enabled", 00:10:13.543 "thread": "nvmf_tgt_poll_group_000", 00:10:13.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:13.543 "listen_address": { 00:10:13.543 "trtype": "TCP", 00:10:13.543 "adrfam": "IPv4", 00:10:13.543 "traddr": "10.0.0.3", 00:10:13.543 "trsvcid": "4420" 00:10:13.543 }, 00:10:13.543 "peer_address": { 00:10:13.543 "trtype": "TCP", 00:10:13.543 "adrfam": 
"IPv4", 00:10:13.543 "traddr": "10.0.0.1", 00:10:13.543 "trsvcid": "54204" 00:10:13.543 }, 00:10:13.543 "auth": { 00:10:13.543 "state": "completed", 00:10:13.543 "digest": "sha256", 00:10:13.543 "dhgroup": "ffdhe4096" 00:10:13.543 } 00:10:13.543 } 00:10:13.543 ]' 00:10:13.543 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.543 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.543 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.543 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:13.543 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.808 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.808 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.808 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.067 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:14.067 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:14.635 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.635 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:14.635 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.635 16:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.635 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.635 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.635 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:14.635 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:14.893 16:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.893 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.152 00:10:15.411 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.411 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.411 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.671 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.671 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.671 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.671 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.671 { 00:10:15.671 "cntlid": 31, 00:10:15.671 "qid": 0, 00:10:15.671 "state": "enabled", 00:10:15.671 "thread": "nvmf_tgt_poll_group_000", 00:10:15.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:15.671 "listen_address": { 00:10:15.671 "trtype": "TCP", 00:10:15.671 "adrfam": "IPv4", 00:10:15.671 "traddr": "10.0.0.3", 00:10:15.671 "trsvcid": "4420" 00:10:15.671 }, 00:10:15.671 "peer_address": { 00:10:15.671 "trtype": "TCP", 
00:10:15.671 "adrfam": "IPv4", 00:10:15.671 "traddr": "10.0.0.1", 00:10:15.671 "trsvcid": "54240" 00:10:15.671 }, 00:10:15.671 "auth": { 00:10:15.671 "state": "completed", 00:10:15.671 "digest": "sha256", 00:10:15.671 "dhgroup": "ffdhe4096" 00:10:15.671 } 00:10:15.671 } 00:10:15.671 ]' 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.671 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.930 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:15.930 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:16.498 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:16.757 
16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.757 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.016 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.016 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.016 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.016 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.275 00:10:17.533 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.533 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.533 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.792 { 00:10:17.792 "cntlid": 33, 00:10:17.792 "qid": 0, 00:10:17.792 "state": "enabled", 00:10:17.792 "thread": "nvmf_tgt_poll_group_000", 00:10:17.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:17.792 "listen_address": { 00:10:17.792 "trtype": "TCP", 00:10:17.792 "adrfam": "IPv4", 00:10:17.792 "traddr": 
"10.0.0.3", 00:10:17.792 "trsvcid": "4420" 00:10:17.792 }, 00:10:17.792 "peer_address": { 00:10:17.792 "trtype": "TCP", 00:10:17.792 "adrfam": "IPv4", 00:10:17.792 "traddr": "10.0.0.1", 00:10:17.792 "trsvcid": "54264" 00:10:17.792 }, 00:10:17.792 "auth": { 00:10:17.792 "state": "completed", 00:10:17.792 "digest": "sha256", 00:10:17.792 "dhgroup": "ffdhe6144" 00:10:17.792 } 00:10:17.792 } 00:10:17.792 ]' 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.792 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.793 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.359 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:18.359 16:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.928 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:18.929 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.188 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.755 00:10:19.755 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.755 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.755 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.014 { 00:10:20.014 "cntlid": 35, 00:10:20.014 "qid": 0, 00:10:20.014 "state": "enabled", 00:10:20.014 "thread": "nvmf_tgt_poll_group_000", 
00:10:20.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:20.014 "listen_address": { 00:10:20.014 "trtype": "TCP", 00:10:20.014 "adrfam": "IPv4", 00:10:20.014 "traddr": "10.0.0.3", 00:10:20.014 "trsvcid": "4420" 00:10:20.014 }, 00:10:20.014 "peer_address": { 00:10:20.014 "trtype": "TCP", 00:10:20.014 "adrfam": "IPv4", 00:10:20.014 "traddr": "10.0.0.1", 00:10:20.014 "trsvcid": "54304" 00:10:20.014 }, 00:10:20.014 "auth": { 00:10:20.014 "state": "completed", 00:10:20.014 "digest": "sha256", 00:10:20.014 "dhgroup": "ffdhe6144" 00:10:20.014 } 00:10:20.014 } 00:10:20.014 ]' 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.014 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.273 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:20.273 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.840 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:20.840 16:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.406 16:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.664 00:10:21.664 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.664 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.664 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.923 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.923 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.923 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.923 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.183 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.183 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.183 { 
00:10:22.183 "cntlid": 37, 00:10:22.183 "qid": 0, 00:10:22.183 "state": "enabled", 00:10:22.183 "thread": "nvmf_tgt_poll_group_000", 00:10:22.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:22.183 "listen_address": { 00:10:22.183 "trtype": "TCP", 00:10:22.183 "adrfam": "IPv4", 00:10:22.183 "traddr": "10.0.0.3", 00:10:22.183 "trsvcid": "4420" 00:10:22.183 }, 00:10:22.183 "peer_address": { 00:10:22.183 "trtype": "TCP", 00:10:22.183 "adrfam": "IPv4", 00:10:22.183 "traddr": "10.0.0.1", 00:10:22.183 "trsvcid": "57536" 00:10:22.183 }, 00:10:22.183 "auth": { 00:10:22.183 "state": "completed", 00:10:22.183 "digest": "sha256", 00:10:22.183 "dhgroup": "ffdhe6144" 00:10:22.183 } 00:10:22.183 } 00:10:22.183 ]' 00:10:22.183 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.183 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.183 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.184 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:22.184 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.184 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.184 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.184 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.443 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:22.443 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.379 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.638 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.638 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.638 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.638 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.638 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.300 00:10:24.300 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.300 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.300 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:24.559 { 00:10:24.559 "cntlid": 39, 00:10:24.559 "qid": 0, 00:10:24.559 "state": "enabled", 00:10:24.559 "thread": "nvmf_tgt_poll_group_000", 00:10:24.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:24.559 "listen_address": { 00:10:24.559 "trtype": "TCP", 00:10:24.559 "adrfam": "IPv4", 00:10:24.559 "traddr": "10.0.0.3", 00:10:24.559 "trsvcid": "4420" 00:10:24.559 }, 00:10:24.559 "peer_address": { 00:10:24.559 "trtype": "TCP", 00:10:24.559 "adrfam": "IPv4", 00:10:24.559 "traddr": "10.0.0.1", 00:10:24.559 "trsvcid": "57558" 00:10:24.559 }, 00:10:24.559 "auth": { 00:10:24.559 "state": "completed", 00:10:24.559 "digest": "sha256", 00:10:24.559 "dhgroup": "ffdhe6144" 00:10:24.559 } 00:10:24.559 } 00:10:24.559 ]' 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:24.559 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.559 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.559 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.559 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.817 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:24.817 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:25.754 16:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.754 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.692 00:10:26.692 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.692 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.692 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.951 { 00:10:26.951 "cntlid": 41, 00:10:26.951 "qid": 0, 00:10:26.951 "state": "enabled", 00:10:26.951 "thread": "nvmf_tgt_poll_group_000", 00:10:26.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:26.951 "listen_address": { 00:10:26.951 "trtype": "TCP", 00:10:26.951 "adrfam": "IPv4", 00:10:26.951 "traddr": "10.0.0.3", 00:10:26.951 "trsvcid": "4420" 00:10:26.951 }, 00:10:26.951 "peer_address": { 00:10:26.951 "trtype": "TCP", 00:10:26.951 "adrfam": "IPv4", 00:10:26.951 "traddr": "10.0.0.1", 00:10:26.951 "trsvcid": "57576" 00:10:26.951 }, 00:10:26.951 "auth": { 00:10:26.951 "state": "completed", 00:10:26.951 "digest": "sha256", 00:10:26.951 "dhgroup": "ffdhe8192" 00:10:26.951 } 00:10:26.951 } 00:10:26.951 ]' 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.951 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.518 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:27.518 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
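
The trace above repeats the same host-side RPC sequence for every digest/dhgroup/key combination. Below is a condensed, hand-written sketch of one such iteration (not the actual target/auth.sh source); it reuses the addresses, NQNs, and host RPC socket seen in this run, assumes the target app listens on the default SPDK RPC socket, and assumes DHCHAP keys named key1/ckey1 were registered earlier in the job, outside this excerpt.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (sha256 digest, ffdhe8192 group,
# key id 1), approximating the RPC calls visible in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8

# Restrict the host-side initiator to the digest/dhgroup under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Allow this host on the target subsystem with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host side, authenticating with the same keys.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the qpair finished DH-HMAC-CHAP with the expected parameters,
# mirroring the jq checks performed by the test.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
    '.[0].auth.state == "completed" and
     .[0].auth.digest == "sha256" and
     .[0].auth.dhgroup == "ffdhe8192"'

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
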
00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.086 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.345 16:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.281 00:10:29.281 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.281 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.281 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.540 16:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.540 { 00:10:29.540 "cntlid": 43, 00:10:29.540 "qid": 0, 00:10:29.540 "state": "enabled", 00:10:29.540 "thread": "nvmf_tgt_poll_group_000", 00:10:29.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:29.540 "listen_address": { 00:10:29.540 "trtype": "TCP", 00:10:29.540 "adrfam": "IPv4", 00:10:29.540 "traddr": "10.0.0.3", 00:10:29.540 "trsvcid": "4420" 00:10:29.540 }, 00:10:29.540 "peer_address": { 00:10:29.540 "trtype": "TCP", 00:10:29.540 "adrfam": "IPv4", 00:10:29.540 "traddr": "10.0.0.1", 00:10:29.540 "trsvcid": "57592" 00:10:29.540 }, 00:10:29.540 "auth": { 00:10:29.540 "state": "completed", 00:10:29.540 "digest": "sha256", 00:10:29.540 "dhgroup": "ffdhe8192" 00:10:29.540 } 00:10:29.540 } 00:10:29.540 ]' 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:29.540 16:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.540 16:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.540 16:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.540 16:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.108 16:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:30.108 16:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
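The jq lines in this trace are the actual pass/fail check for each round: the target is asked for the subsystem's qpairs, and the negotiated digest, DH group, and auth state are compared against what the host was configured to allow. A condensed sketch of that check, with the expected values hard-coded for the sha256/ffdhe8192 round shown here (paths and values mirror the trace, not the script source):

# Sketch: verify the negotiated auth parameters on the target side.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # digest the host was limited to
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # DH group the host was limited to
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication actually finished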
00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.676 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:30.677 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.935 16:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.872 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.872 16:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.872 { 00:10:31.872 "cntlid": 45, 00:10:31.872 "qid": 0, 00:10:31.872 "state": "enabled", 00:10:31.872 "thread": "nvmf_tgt_poll_group_000", 00:10:31.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:31.872 "listen_address": { 00:10:31.872 "trtype": "TCP", 00:10:31.872 "adrfam": "IPv4", 00:10:31.872 "traddr": "10.0.0.3", 00:10:31.872 "trsvcid": "4420" 00:10:31.872 }, 00:10:31.872 "peer_address": { 00:10:31.872 "trtype": "TCP", 00:10:31.872 "adrfam": "IPv4", 00:10:31.872 "traddr": "10.0.0.1", 00:10:31.872 "trsvcid": "42052" 00:10:31.872 }, 00:10:31.872 "auth": { 00:10:31.872 "state": "completed", 00:10:31.872 "digest": "sha256", 00:10:31.872 "dhgroup": "ffdhe8192" 00:10:31.872 } 00:10:31.872 } 00:10:31.872 ]' 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.872 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.131 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:32.131 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.131 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.131 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.131 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.390 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:32.390 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
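Besides the SPDK bdev_nvme path, each pass also exercises the Linux kernel initiator through nvme-cli, with the DHHC-1 secrets passed directly on the connect line (the nvme_connect / auth.sh@36 entries above). A trimmed sketch of that leg follows; the long secrets are shortened to placeholders here, whereas the real trace passes the full strings.

# Sketch of the kernel-initiator leg of a pass. '-i 1' asks for a single I/O queue
# and '-l 0' sets ctrl-loss-tmo to 0 so a failed connect returns quickly.
# The DHHC-1 strings are placeholders; the trace shows the full secrets.
nvme connect -t tcp -a 10.0.0.3 \
    -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 \
    --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 \
    --dhchap-secret 'DHHC-1:02:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'

# Drop the kernel connection again before the host entry is removed on the target.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0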
00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:32.956 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.553 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.554 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.119 00:10:34.119 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.119 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.119 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.379 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.379 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.379 
16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.379 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.379 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.379 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.379 { 00:10:34.380 "cntlid": 47, 00:10:34.380 "qid": 0, 00:10:34.380 "state": "enabled", 00:10:34.380 "thread": "nvmf_tgt_poll_group_000", 00:10:34.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:34.380 "listen_address": { 00:10:34.380 "trtype": "TCP", 00:10:34.380 "adrfam": "IPv4", 00:10:34.380 "traddr": "10.0.0.3", 00:10:34.380 "trsvcid": "4420" 00:10:34.380 }, 00:10:34.380 "peer_address": { 00:10:34.380 "trtype": "TCP", 00:10:34.380 "adrfam": "IPv4", 00:10:34.380 "traddr": "10.0.0.1", 00:10:34.380 "trsvcid": "42070" 00:10:34.380 }, 00:10:34.380 "auth": { 00:10:34.380 "state": "completed", 00:10:34.380 "digest": "sha256", 00:10:34.380 "dhgroup": "ffdhe8192" 00:10:34.380 } 00:10:34.380 } 00:10:34.380 ]' 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.380 16:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.951 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:34.951 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
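At this point the trace rolls over from the sha256/ffdhe8192 round to sha384/null: the auth.sh@118-120 entries show the three nested loops that drive this whole section, iterating digests, then DH groups, then the individual keys. In outline (the array contents are assumptions, since only sha256/sha384 and ffdhe8192/null/ffdhe2048 appear in this excerpt, and the helpers are assumed to be the ones target/auth.sh already defines):

# Sketch of the sweep driving this section of the log. hostrpc and
# connect_authenticate are the helpers traced above; array contents are assumptions.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Re-arm the host with exactly one allowed digest/dhgroup ...
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # ... then run one attach/verify/teardown pass with key$keyid.
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done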
00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:35.519 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.778 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.037 00:10:36.037 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.037 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.037 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.606 { 00:10:36.606 "cntlid": 49, 00:10:36.606 "qid": 0, 00:10:36.606 "state": "enabled", 00:10:36.606 "thread": "nvmf_tgt_poll_group_000", 00:10:36.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:36.606 "listen_address": { 00:10:36.606 "trtype": "TCP", 00:10:36.606 "adrfam": "IPv4", 00:10:36.606 "traddr": "10.0.0.3", 00:10:36.606 "trsvcid": "4420" 00:10:36.606 }, 00:10:36.606 "peer_address": { 00:10:36.606 "trtype": "TCP", 00:10:36.606 "adrfam": "IPv4", 00:10:36.606 "traddr": "10.0.0.1", 00:10:36.606 "trsvcid": "42094" 00:10:36.606 }, 00:10:36.606 "auth": { 00:10:36.606 "state": "completed", 00:10:36.606 "digest": "sha384", 00:10:36.606 "dhgroup": "null" 00:10:36.606 } 00:10:36.606 } 00:10:36.606 ]' 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:36.606 16:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.606 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.606 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.606 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.865 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:36.865 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:37.801 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.801 16:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:37.801 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.801 16:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.801 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.802 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.370 00:10:38.370 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.370 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
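Every hostrpc line in this trace expands to the same rpc.py client pointed at /var/tmp/host.sock, i.e. the RPC socket of a second SPDK application acting as the NVMe/TCP host, while the bare rpc_cmd calls go to the nvmf target on its default socket. A plausible shape for that shorthand (the wrapper body itself is an assumption; only its expansion is visible in the log):

# Sketch of the hostrpc shorthand: same rpc.py client, different Unix socket,
# so the command lands on the host-side SPDK app rather than the nvmf target.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

hostrpc() {
    "$rpc_py" -s /var/tmp/host.sock "$@"
}

# Example, as in the auth.sh@73 entries above: list controllers attached on the host side.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'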
00:10:38.370 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.629 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.630 { 00:10:38.630 "cntlid": 51, 00:10:38.630 "qid": 0, 00:10:38.630 "state": "enabled", 00:10:38.630 "thread": "nvmf_tgt_poll_group_000", 00:10:38.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:38.630 "listen_address": { 00:10:38.630 "trtype": "TCP", 00:10:38.630 "adrfam": "IPv4", 00:10:38.630 "traddr": "10.0.0.3", 00:10:38.630 "trsvcid": "4420" 00:10:38.630 }, 00:10:38.630 "peer_address": { 00:10:38.630 "trtype": "TCP", 00:10:38.630 "adrfam": "IPv4", 00:10:38.630 "traddr": "10.0.0.1", 00:10:38.630 "trsvcid": "42110" 00:10:38.630 }, 00:10:38.630 "auth": { 00:10:38.630 "state": "completed", 00:10:38.630 "digest": "sha384", 00:10:38.630 "dhgroup": "null" 00:10:38.630 } 00:10:38.630 } 00:10:38.630 ]' 00:10:38.630 16:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.630 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.196 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:39.196 16:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.763 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:39.763 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.021 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.279 00:10:40.279 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.279 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.279 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.845 { 00:10:40.845 "cntlid": 53, 00:10:40.845 "qid": 0, 00:10:40.845 "state": "enabled", 00:10:40.845 "thread": "nvmf_tgt_poll_group_000", 00:10:40.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:40.845 "listen_address": { 00:10:40.845 "trtype": "TCP", 00:10:40.845 "adrfam": "IPv4", 00:10:40.845 "traddr": "10.0.0.3", 00:10:40.845 "trsvcid": "4420" 00:10:40.845 }, 00:10:40.845 "peer_address": { 00:10:40.845 "trtype": "TCP", 00:10:40.845 "adrfam": "IPv4", 00:10:40.845 "traddr": "10.0.0.1", 00:10:40.845 "trsvcid": "54308" 00:10:40.845 }, 00:10:40.845 "auth": { 00:10:40.845 "state": "completed", 00:10:40.845 "digest": "sha384", 00:10:40.845 "dhgroup": "null" 00:10:40.845 } 00:10:40.845 } 00:10:40.845 ]' 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.845 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.104 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:41.104 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:41.672 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:41.931 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.190 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.449 00:10:42.449 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.449 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:10:42.449 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.708 { 00:10:42.708 "cntlid": 55, 00:10:42.708 "qid": 0, 00:10:42.708 "state": "enabled", 00:10:42.708 "thread": "nvmf_tgt_poll_group_000", 00:10:42.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:42.708 "listen_address": { 00:10:42.708 "trtype": "TCP", 00:10:42.708 "adrfam": "IPv4", 00:10:42.708 "traddr": "10.0.0.3", 00:10:42.708 "trsvcid": "4420" 00:10:42.708 }, 00:10:42.708 "peer_address": { 00:10:42.708 "trtype": "TCP", 00:10:42.708 "adrfam": "IPv4", 00:10:42.708 "traddr": "10.0.0.1", 00:10:42.708 "trsvcid": "54330" 00:10:42.708 }, 00:10:42.708 "auth": { 00:10:42.708 "state": "completed", 00:10:42.708 "digest": "sha384", 00:10:42.708 "dhgroup": "null" 00:10:42.708 } 00:10:42.708 } 00:10:42.708 ]' 00:10:42.708 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.966 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:42.966 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.967 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:42.967 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.967 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.967 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.967 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.225 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:43.225 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
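All of the secrets handed to nvme connect in this log use the DHHC-1 notation: a 'DHHC-1:' prefix, a two-digit field, a base64 payload, and a trailing colon. The interpretation of the middle field is an assumption here (it appears to select an optional hash transform of the secret, with 00 meaning none); the log itself never spells it out. A small sanity check for that shape, using one of the secrets visible in this trace, could look like:

# Sketch: sanity-check the shape of a DH-HMAC-CHAP secret before handing it to
# nvme-cli or SPDK. The meaning of the two-digit field is an assumption; only the
# DHHC-1:<nn>:<base64>: shape is taken from the secrets in this trace.
check_dhchap_secret() {
    local secret=$1
    if [[ $secret =~ ^DHHC-1:([0-9]{2}):[A-Za-z0-9+/=]+:$ ]]; then
        echo "DHHC-1 secret, transform field ${BASH_REMATCH[1]}"
    else
        echo "unexpected secret format: $secret" >&2
        return 1
    fi
}

check_dhchap_secret 'DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==:'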
00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:43.794 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.054 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.622 00:10:44.622 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.622 
16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.622 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.881 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.881 { 00:10:44.881 "cntlid": 57, 00:10:44.881 "qid": 0, 00:10:44.881 "state": "enabled", 00:10:44.881 "thread": "nvmf_tgt_poll_group_000", 00:10:44.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:44.881 "listen_address": { 00:10:44.881 "trtype": "TCP", 00:10:44.881 "adrfam": "IPv4", 00:10:44.881 "traddr": "10.0.0.3", 00:10:44.881 "trsvcid": "4420" 00:10:44.881 }, 00:10:44.881 "peer_address": { 00:10:44.881 "trtype": "TCP", 00:10:44.881 "adrfam": "IPv4", 00:10:44.882 "traddr": "10.0.0.1", 00:10:44.882 "trsvcid": "54362" 00:10:44.882 }, 00:10:44.882 "auth": { 00:10:44.882 "state": "completed", 00:10:44.882 "digest": "sha384", 00:10:44.882 "dhgroup": "ffdhe2048" 00:10:44.882 } 00:10:44.882 } 00:10:44.882 ]' 00:10:44.882 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.882 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.882 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.882 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.882 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.141 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.141 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.141 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.400 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:45.400 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: 
--dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.968 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.226 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.227 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.227 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.227 16:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.794 00:10:46.794 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.794 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.794 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.053 { 00:10:47.053 "cntlid": 59, 00:10:47.053 "qid": 0, 00:10:47.053 "state": "enabled", 00:10:47.053 "thread": "nvmf_tgt_poll_group_000", 00:10:47.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:47.053 "listen_address": { 00:10:47.053 "trtype": "TCP", 00:10:47.053 "adrfam": "IPv4", 00:10:47.053 "traddr": "10.0.0.3", 00:10:47.053 "trsvcid": "4420" 00:10:47.053 }, 00:10:47.053 "peer_address": { 00:10:47.053 "trtype": "TCP", 00:10:47.053 "adrfam": "IPv4", 00:10:47.053 "traddr": "10.0.0.1", 00:10:47.053 "trsvcid": "54382" 00:10:47.053 }, 00:10:47.053 "auth": { 00:10:47.053 "state": "completed", 00:10:47.053 "digest": "sha384", 00:10:47.053 "dhgroup": "ffdhe2048" 00:10:47.053 } 00:10:47.053 } 00:10:47.053 ]' 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.053 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.312 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:47.312 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.249 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.509 16:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.768 00:10:48.768 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.768 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.768 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.027 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.027 { 00:10:49.027 "cntlid": 61, 00:10:49.027 "qid": 0, 00:10:49.027 "state": "enabled", 00:10:49.027 "thread": "nvmf_tgt_poll_group_000", 00:10:49.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:49.027 "listen_address": { 00:10:49.027 "trtype": "TCP", 00:10:49.027 "adrfam": "IPv4", 00:10:49.027 "traddr": "10.0.0.3", 00:10:49.027 "trsvcid": "4420" 00:10:49.027 }, 00:10:49.027 "peer_address": { 00:10:49.027 "trtype": "TCP", 00:10:49.027 "adrfam": "IPv4", 00:10:49.028 "traddr": "10.0.0.1", 00:10:49.028 "trsvcid": "54402" 00:10:49.028 }, 00:10:49.028 "auth": { 00:10:49.028 "state": "completed", 00:10:49.028 "digest": "sha384", 00:10:49.028 "dhgroup": "ffdhe2048" 00:10:49.028 } 00:10:49.028 } 00:10:49.028 ]' 00:10:49.028 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.028 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.028 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.287 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:49.287 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.287 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.287 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.287 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.547 16:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:49.547 16:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:50.115 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.684 16:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.943 00:10:50.943 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.943 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.943 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.202 { 00:10:51.202 "cntlid": 63, 00:10:51.202 "qid": 0, 00:10:51.202 "state": "enabled", 00:10:51.202 "thread": "nvmf_tgt_poll_group_000", 00:10:51.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:51.202 "listen_address": { 00:10:51.202 "trtype": "TCP", 00:10:51.202 "adrfam": "IPv4", 00:10:51.202 "traddr": "10.0.0.3", 00:10:51.202 "trsvcid": "4420" 00:10:51.202 }, 00:10:51.202 "peer_address": { 00:10:51.202 "trtype": "TCP", 00:10:51.202 "adrfam": "IPv4", 00:10:51.202 "traddr": "10.0.0.1", 00:10:51.202 "trsvcid": "41106" 00:10:51.202 }, 00:10:51.202 "auth": { 00:10:51.202 "state": "completed", 00:10:51.202 "digest": "sha384", 00:10:51.202 "dhgroup": "ffdhe2048" 00:10:51.202 } 00:10:51.202 } 00:10:51.202 ]' 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:51.202 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.461 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.461 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.461 16:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.720 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:51.720 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:52.287 16:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:52.855 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.114 00:10:53.114 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.114 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.114 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.378 { 00:10:53.378 "cntlid": 65, 00:10:53.378 "qid": 0, 00:10:53.378 "state": "enabled", 00:10:53.378 "thread": "nvmf_tgt_poll_group_000", 00:10:53.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:53.378 "listen_address": { 00:10:53.378 "trtype": "TCP", 00:10:53.378 "adrfam": "IPv4", 00:10:53.378 "traddr": "10.0.0.3", 00:10:53.378 "trsvcid": "4420" 00:10:53.378 }, 00:10:53.378 "peer_address": { 00:10:53.378 "trtype": "TCP", 00:10:53.378 "adrfam": "IPv4", 00:10:53.378 "traddr": "10.0.0.1", 00:10:53.378 "trsvcid": "41124" 00:10:53.378 }, 00:10:53.378 "auth": { 00:10:53.378 "state": "completed", 00:10:53.378 "digest": "sha384", 00:10:53.378 "dhgroup": "ffdhe3072" 00:10:53.378 } 00:10:53.378 } 00:10:53.378 ]' 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.378 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.636 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.636 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.636 16:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.896 16:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:53.897 16:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:10:54.467 16:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.726 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.010 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.011 16:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.011 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.269 00:10:55.269 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.269 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.269 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.528 { 00:10:55.528 "cntlid": 67, 00:10:55.528 "qid": 0, 00:10:55.528 "state": "enabled", 00:10:55.528 "thread": "nvmf_tgt_poll_group_000", 00:10:55.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:55.528 "listen_address": { 00:10:55.528 "trtype": "TCP", 00:10:55.528 "adrfam": "IPv4", 00:10:55.528 "traddr": "10.0.0.3", 00:10:55.528 "trsvcid": "4420" 00:10:55.528 }, 00:10:55.528 "peer_address": { 00:10:55.528 "trtype": "TCP", 00:10:55.528 "adrfam": "IPv4", 00:10:55.528 "traddr": "10.0.0.1", 00:10:55.528 "trsvcid": "41150" 00:10:55.528 }, 00:10:55.528 "auth": { 00:10:55.528 "state": "completed", 00:10:55.528 "digest": "sha384", 00:10:55.528 "dhgroup": "ffdhe3072" 00:10:55.528 } 00:10:55.528 } 00:10:55.528 ]' 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.528 16:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.528 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.528 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.787 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.787 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.787 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.046 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:56.046 16:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.984 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.551 00:10:57.551 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.551 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.551 16:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.810 { 00:10:57.810 "cntlid": 69, 00:10:57.810 "qid": 0, 00:10:57.810 "state": "enabled", 00:10:57.810 "thread": "nvmf_tgt_poll_group_000", 00:10:57.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:10:57.810 "listen_address": { 00:10:57.810 "trtype": "TCP", 00:10:57.810 "adrfam": "IPv4", 00:10:57.810 "traddr": "10.0.0.3", 00:10:57.810 "trsvcid": "4420" 00:10:57.810 }, 00:10:57.810 "peer_address": { 00:10:57.810 "trtype": "TCP", 00:10:57.810 "adrfam": "IPv4", 00:10:57.810 "traddr": "10.0.0.1", 00:10:57.810 "trsvcid": "41176" 00:10:57.810 }, 00:10:57.810 "auth": { 00:10:57.810 "state": "completed", 00:10:57.810 "digest": "sha384", 00:10:57.810 "dhgroup": "ffdhe3072" 00:10:57.810 } 00:10:57.810 } 00:10:57.810 ]' 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:57.810 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.378 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:58.378 16:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:58.944 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.202 16:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.769 00:10:59.769 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.769 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.769 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.028 { 00:11:00.028 "cntlid": 71, 00:11:00.028 "qid": 0, 00:11:00.028 "state": "enabled", 00:11:00.028 "thread": "nvmf_tgt_poll_group_000", 00:11:00.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:00.028 "listen_address": { 00:11:00.028 "trtype": "TCP", 00:11:00.028 "adrfam": "IPv4", 00:11:00.028 "traddr": "10.0.0.3", 00:11:00.028 "trsvcid": "4420" 00:11:00.028 }, 00:11:00.028 "peer_address": { 00:11:00.028 "trtype": "TCP", 00:11:00.028 "adrfam": "IPv4", 00:11:00.028 "traddr": "10.0.0.1", 00:11:00.028 "trsvcid": "50076" 00:11:00.028 }, 00:11:00.028 "auth": { 00:11:00.028 "state": "completed", 00:11:00.028 "digest": "sha384", 00:11:00.028 "dhgroup": "ffdhe3072" 00:11:00.028 } 00:11:00.028 } 00:11:00.028 ]' 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.028 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.287 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:00.287 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.287 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.287 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.287 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.545 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:00.545 16:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.201 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.460 16:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.460 16:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.719 00:11:01.719 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.719 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.719 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.288 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.288 { 00:11:02.288 "cntlid": 73, 00:11:02.288 "qid": 0, 00:11:02.288 "state": "enabled", 00:11:02.288 "thread": "nvmf_tgt_poll_group_000", 00:11:02.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:02.288 "listen_address": { 00:11:02.288 "trtype": "TCP", 00:11:02.289 "adrfam": "IPv4", 00:11:02.289 "traddr": "10.0.0.3", 00:11:02.289 "trsvcid": "4420" 00:11:02.289 }, 00:11:02.289 "peer_address": { 00:11:02.289 "trtype": "TCP", 00:11:02.289 "adrfam": "IPv4", 00:11:02.289 "traddr": "10.0.0.1", 00:11:02.289 "trsvcid": "50100" 00:11:02.289 }, 00:11:02.289 "auth": { 00:11:02.289 "state": "completed", 00:11:02.289 "digest": "sha384", 00:11:02.289 "dhgroup": "ffdhe4096" 00:11:02.289 } 00:11:02.289 } 00:11:02.289 ]' 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.289 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.548 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:02.548 16:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:03.116 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.117 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.685 16:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.685 16:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.944 00:11:03.944 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.944 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.944 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.203 { 00:11:04.203 "cntlid": 75, 00:11:04.203 "qid": 0, 00:11:04.203 "state": "enabled", 00:11:04.203 "thread": "nvmf_tgt_poll_group_000", 00:11:04.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:04.203 "listen_address": { 00:11:04.203 "trtype": "TCP", 00:11:04.203 "adrfam": "IPv4", 00:11:04.203 "traddr": "10.0.0.3", 00:11:04.203 "trsvcid": "4420" 00:11:04.203 }, 00:11:04.203 "peer_address": { 00:11:04.203 "trtype": "TCP", 00:11:04.203 "adrfam": "IPv4", 00:11:04.203 "traddr": "10.0.0.1", 00:11:04.203 "trsvcid": "50130" 00:11:04.203 }, 00:11:04.203 "auth": { 00:11:04.203 "state": "completed", 00:11:04.203 "digest": "sha384", 00:11:04.203 "dhgroup": "ffdhe4096" 00:11:04.203 } 00:11:04.203 } 00:11:04.203 ]' 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:04.203 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.463 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.463 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.463 16:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.722 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:04.722 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:05.291 16:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.860 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.119 00:11:06.119 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.119 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.119 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.377 { 00:11:06.377 "cntlid": 77, 00:11:06.377 "qid": 0, 00:11:06.377 "state": "enabled", 00:11:06.377 "thread": "nvmf_tgt_poll_group_000", 00:11:06.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:06.377 "listen_address": { 00:11:06.377 "trtype": "TCP", 00:11:06.377 "adrfam": "IPv4", 00:11:06.377 "traddr": "10.0.0.3", 00:11:06.377 "trsvcid": "4420" 00:11:06.377 }, 00:11:06.377 "peer_address": { 00:11:06.377 "trtype": "TCP", 00:11:06.377 "adrfam": "IPv4", 00:11:06.377 "traddr": "10.0.0.1", 00:11:06.377 "trsvcid": "50164" 00:11:06.377 }, 00:11:06.377 "auth": { 00:11:06.377 "state": "completed", 00:11:06.377 "digest": "sha384", 00:11:06.377 "dhgroup": "ffdhe4096" 00:11:06.377 } 00:11:06.377 } 00:11:06.377 ]' 00:11:06.377 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.636 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.636 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:06.636 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:06.637 16:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.637 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.637 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.637 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.895 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:06.895 16:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:07.831 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.832 16:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.832 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.399 00:11:08.399 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.399 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.399 16:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.657 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.657 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.657 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.657 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.658 { 00:11:08.658 "cntlid": 79, 00:11:08.658 "qid": 0, 00:11:08.658 "state": "enabled", 00:11:08.658 "thread": "nvmf_tgt_poll_group_000", 00:11:08.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:08.658 "listen_address": { 00:11:08.658 "trtype": "TCP", 00:11:08.658 "adrfam": "IPv4", 00:11:08.658 "traddr": "10.0.0.3", 00:11:08.658 "trsvcid": "4420" 00:11:08.658 }, 00:11:08.658 "peer_address": { 00:11:08.658 "trtype": "TCP", 00:11:08.658 "adrfam": "IPv4", 00:11:08.658 "traddr": "10.0.0.1", 00:11:08.658 "trsvcid": "50196" 00:11:08.658 }, 00:11:08.658 "auth": { 00:11:08.658 "state": "completed", 00:11:08.658 "digest": "sha384", 00:11:08.658 "dhgroup": "ffdhe4096" 00:11:08.658 } 00:11:08.658 } 00:11:08.658 ]' 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.658 16:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.658 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.226 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:09.226 16:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:09.793 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.053 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.311 00:11:10.312 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.312 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.312 16:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.570 { 00:11:10.570 "cntlid": 81, 00:11:10.570 "qid": 0, 00:11:10.570 "state": "enabled", 00:11:10.570 "thread": "nvmf_tgt_poll_group_000", 00:11:10.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:10.570 "listen_address": { 00:11:10.570 "trtype": "TCP", 00:11:10.570 "adrfam": "IPv4", 00:11:10.570 "traddr": "10.0.0.3", 00:11:10.570 "trsvcid": "4420" 00:11:10.570 }, 00:11:10.570 "peer_address": { 00:11:10.570 "trtype": "TCP", 00:11:10.570 "adrfam": "IPv4", 00:11:10.570 "traddr": "10.0.0.1", 00:11:10.570 "trsvcid": "51898" 00:11:10.570 }, 00:11:10.570 "auth": { 00:11:10.570 "state": "completed", 00:11:10.570 "digest": "sha384", 00:11:10.570 "dhgroup": "ffdhe6144" 00:11:10.570 } 00:11:10.570 } 00:11:10.570 ]' 00:11:10.570 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.842 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.119 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:11.119 16:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:11.687 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.946 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.515 00:11:12.515 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.515 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.515 16:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.789 { 00:11:12.789 "cntlid": 83, 00:11:12.789 "qid": 0, 00:11:12.789 "state": "enabled", 00:11:12.789 "thread": "nvmf_tgt_poll_group_000", 00:11:12.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:12.789 "listen_address": { 00:11:12.789 "trtype": "TCP", 00:11:12.789 "adrfam": "IPv4", 00:11:12.789 "traddr": "10.0.0.3", 00:11:12.789 "trsvcid": "4420" 00:11:12.789 }, 00:11:12.789 "peer_address": { 00:11:12.789 "trtype": "TCP", 00:11:12.789 "adrfam": "IPv4", 00:11:12.789 "traddr": "10.0.0.1", 00:11:12.789 "trsvcid": "51914" 00:11:12.789 }, 00:11:12.789 "auth": { 00:11:12.789 "state": "completed", 00:11:12.789 "digest": "sha384", 
00:11:12.789 "dhgroup": "ffdhe6144" 00:11:12.789 } 00:11:12.789 } 00:11:12.789 ]' 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.789 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.055 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.055 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.055 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.313 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:13.314 16:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:13.880 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.138 16:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.706 00:11:14.706 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.706 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.706 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.965 { 00:11:14.965 "cntlid": 85, 00:11:14.965 "qid": 0, 00:11:14.965 "state": "enabled", 00:11:14.965 "thread": "nvmf_tgt_poll_group_000", 00:11:14.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:14.965 "listen_address": { 00:11:14.965 "trtype": "TCP", 00:11:14.965 "adrfam": "IPv4", 00:11:14.965 "traddr": "10.0.0.3", 00:11:14.965 "trsvcid": "4420" 00:11:14.965 }, 00:11:14.965 "peer_address": { 00:11:14.965 "trtype": "TCP", 00:11:14.965 "adrfam": "IPv4", 00:11:14.965 "traddr": "10.0.0.1", 00:11:14.965 "trsvcid": "51948" 
00:11:14.965 }, 00:11:14.965 "auth": { 00:11:14.965 "state": "completed", 00:11:14.965 "digest": "sha384", 00:11:14.965 "dhgroup": "ffdhe6144" 00:11:14.965 } 00:11:14.965 } 00:11:14.965 ]' 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.965 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.533 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:15.533 16:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.100 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.358 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:16.358 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:16.358 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.359 16:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.926 00:11:16.926 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.926 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.926 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.185 { 00:11:17.185 "cntlid": 87, 00:11:17.185 "qid": 0, 00:11:17.185 "state": "enabled", 00:11:17.185 "thread": "nvmf_tgt_poll_group_000", 00:11:17.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:17.185 "listen_address": { 00:11:17.185 "trtype": "TCP", 00:11:17.185 "adrfam": "IPv4", 00:11:17.185 "traddr": "10.0.0.3", 00:11:17.185 "trsvcid": "4420" 00:11:17.185 }, 00:11:17.185 "peer_address": { 00:11:17.185 "trtype": "TCP", 00:11:17.185 "adrfam": "IPv4", 00:11:17.185 "traddr": "10.0.0.1", 00:11:17.185 "trsvcid": 
"51992" 00:11:17.185 }, 00:11:17.185 "auth": { 00:11:17.185 "state": "completed", 00:11:17.185 "digest": "sha384", 00:11:17.185 "dhgroup": "ffdhe6144" 00:11:17.185 } 00:11:17.185 } 00:11:17.185 ]' 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.185 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.444 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:17.444 16:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:18.012 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:18.013 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.272 16:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.840 00:11:19.099 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.099 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.099 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.359 { 00:11:19.359 "cntlid": 89, 00:11:19.359 "qid": 0, 00:11:19.359 "state": "enabled", 00:11:19.359 "thread": "nvmf_tgt_poll_group_000", 00:11:19.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:19.359 "listen_address": { 00:11:19.359 "trtype": "TCP", 00:11:19.359 "adrfam": "IPv4", 00:11:19.359 "traddr": "10.0.0.3", 00:11:19.359 "trsvcid": "4420" 00:11:19.359 }, 00:11:19.359 "peer_address": { 00:11:19.359 
"trtype": "TCP", 00:11:19.359 "adrfam": "IPv4", 00:11:19.359 "traddr": "10.0.0.1", 00:11:19.359 "trsvcid": "52014" 00:11:19.359 }, 00:11:19.359 "auth": { 00:11:19.359 "state": "completed", 00:11:19.359 "digest": "sha384", 00:11:19.359 "dhgroup": "ffdhe8192" 00:11:19.359 } 00:11:19.359 } 00:11:19.359 ]' 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.359 16:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.619 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:19.619 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.556 16:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.557 16:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.557 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.492 00:11:21.492 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.492 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.492 16:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.751 { 00:11:21.751 "cntlid": 91, 00:11:21.751 "qid": 0, 00:11:21.751 "state": "enabled", 00:11:21.751 "thread": "nvmf_tgt_poll_group_000", 00:11:21.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 
00:11:21.751 "listen_address": { 00:11:21.751 "trtype": "TCP", 00:11:21.751 "adrfam": "IPv4", 00:11:21.751 "traddr": "10.0.0.3", 00:11:21.751 "trsvcid": "4420" 00:11:21.751 }, 00:11:21.751 "peer_address": { 00:11:21.751 "trtype": "TCP", 00:11:21.751 "adrfam": "IPv4", 00:11:21.751 "traddr": "10.0.0.1", 00:11:21.751 "trsvcid": "32862" 00:11:21.751 }, 00:11:21.751 "auth": { 00:11:21.751 "state": "completed", 00:11:21.751 "digest": "sha384", 00:11:21.751 "dhgroup": "ffdhe8192" 00:11:21.751 } 00:11:21.751 } 00:11:21.751 ]' 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.751 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.010 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:22.010 16:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.576 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.143 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.711 00:11:23.711 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.711 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.711 16:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.711 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.711 { 00:11:23.711 "cntlid": 93, 00:11:23.711 "qid": 0, 00:11:23.711 "state": "enabled", 00:11:23.711 "thread": 
"nvmf_tgt_poll_group_000", 00:11:23.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:23.711 "listen_address": { 00:11:23.711 "trtype": "TCP", 00:11:23.711 "adrfam": "IPv4", 00:11:23.711 "traddr": "10.0.0.3", 00:11:23.711 "trsvcid": "4420" 00:11:23.711 }, 00:11:23.711 "peer_address": { 00:11:23.711 "trtype": "TCP", 00:11:23.712 "adrfam": "IPv4", 00:11:23.712 "traddr": "10.0.0.1", 00:11:23.712 "trsvcid": "32890" 00:11:23.712 }, 00:11:23.712 "auth": { 00:11:23.712 "state": "completed", 00:11:23.712 "digest": "sha384", 00:11:23.712 "dhgroup": "ffdhe8192" 00:11:23.712 } 00:11:23.712 } 00:11:23.712 ]' 00:11:23.712 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.971 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.230 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:24.230 16:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.167 16:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.167 16:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.104 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.104 { 00:11:26.104 "cntlid": 95, 00:11:26.104 "qid": 0, 00:11:26.104 "state": "enabled", 00:11:26.104 
"thread": "nvmf_tgt_poll_group_000", 00:11:26.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:26.104 "listen_address": { 00:11:26.104 "trtype": "TCP", 00:11:26.104 "adrfam": "IPv4", 00:11:26.104 "traddr": "10.0.0.3", 00:11:26.104 "trsvcid": "4420" 00:11:26.104 }, 00:11:26.104 "peer_address": { 00:11:26.104 "trtype": "TCP", 00:11:26.104 "adrfam": "IPv4", 00:11:26.104 "traddr": "10.0.0.1", 00:11:26.104 "trsvcid": "32910" 00:11:26.104 }, 00:11:26.104 "auth": { 00:11:26.104 "state": "completed", 00:11:26.104 "digest": "sha384", 00:11:26.104 "dhgroup": "ffdhe8192" 00:11:26.104 } 00:11:26.104 } 00:11:26.104 ]' 00:11:26.104 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.367 16:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.626 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:26.626 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.193 16:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.193 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.453 16:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.712 00:11:27.971 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.971 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.971 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.230 { 00:11:28.230 "cntlid": 97, 00:11:28.230 "qid": 0, 00:11:28.230 "state": "enabled", 00:11:28.230 "thread": "nvmf_tgt_poll_group_000", 00:11:28.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:28.230 "listen_address": { 00:11:28.230 "trtype": "TCP", 00:11:28.230 "adrfam": "IPv4", 00:11:28.230 "traddr": "10.0.0.3", 00:11:28.230 "trsvcid": "4420" 00:11:28.230 }, 00:11:28.230 "peer_address": { 00:11:28.230 "trtype": "TCP", 00:11:28.230 "adrfam": "IPv4", 00:11:28.230 "traddr": "10.0.0.1", 00:11:28.230 "trsvcid": "32932" 00:11:28.230 }, 00:11:28.230 "auth": { 00:11:28.230 "state": "completed", 00:11:28.230 "digest": "sha512", 00:11:28.230 "dhgroup": "null" 00:11:28.230 } 00:11:28.230 } 00:11:28.230 ]' 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.230 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.489 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:28.489 16:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.056 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.315 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.316 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.316 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.316 16:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.575 00:11:29.575 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.575 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.575 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.834 16:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.834 { 00:11:29.834 "cntlid": 99, 00:11:29.834 "qid": 0, 00:11:29.834 "state": "enabled", 00:11:29.834 "thread": "nvmf_tgt_poll_group_000", 00:11:29.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:29.834 "listen_address": { 00:11:29.834 "trtype": "TCP", 00:11:29.834 "adrfam": "IPv4", 00:11:29.834 "traddr": "10.0.0.3", 00:11:29.834 "trsvcid": "4420" 00:11:29.834 }, 00:11:29.834 "peer_address": { 00:11:29.834 "trtype": "TCP", 00:11:29.834 "adrfam": "IPv4", 00:11:29.834 "traddr": "10.0.0.1", 00:11:29.834 "trsvcid": "49618" 00:11:29.834 }, 00:11:29.834 "auth": { 00:11:29.834 "state": "completed", 00:11:29.834 "digest": "sha512", 00:11:29.834 "dhgroup": "null" 00:11:29.834 } 00:11:29.834 } 00:11:29.834 ]' 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.834 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.094 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:30.094 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.094 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.094 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.094 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.354 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:30.354 16:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.922 16:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:30.922 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.181 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.440 00:11:31.440 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.440 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.440 16:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.699 { 00:11:31.699 "cntlid": 101, 00:11:31.699 "qid": 0, 00:11:31.699 "state": "enabled", 00:11:31.699 "thread": "nvmf_tgt_poll_group_000", 00:11:31.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:31.699 "listen_address": { 00:11:31.699 "trtype": "TCP", 00:11:31.699 "adrfam": "IPv4", 00:11:31.699 "traddr": "10.0.0.3", 00:11:31.699 "trsvcid": "4420" 00:11:31.699 }, 00:11:31.699 "peer_address": { 00:11:31.699 "trtype": "TCP", 00:11:31.699 "adrfam": "IPv4", 00:11:31.699 "traddr": "10.0.0.1", 00:11:31.699 "trsvcid": "49642" 00:11:31.699 }, 00:11:31.699 "auth": { 00:11:31.699 "state": "completed", 00:11:31.699 "digest": "sha512", 00:11:31.699 "dhgroup": "null" 00:11:31.699 } 00:11:31.699 } 00:11:31.699 ]' 00:11:31.699 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.958 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.218 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:32.218 16:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:32.785 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.045 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.304 00:11:33.304 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.304 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.304 16:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.873 { 00:11:33.873 "cntlid": 103, 00:11:33.873 "qid": 0, 00:11:33.873 "state": "enabled", 00:11:33.873 "thread": "nvmf_tgt_poll_group_000", 00:11:33.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:33.873 "listen_address": { 00:11:33.873 "trtype": "TCP", 00:11:33.873 "adrfam": "IPv4", 00:11:33.873 "traddr": "10.0.0.3", 00:11:33.873 "trsvcid": "4420" 00:11:33.873 }, 00:11:33.873 "peer_address": { 00:11:33.873 "trtype": "TCP", 00:11:33.873 "adrfam": "IPv4", 00:11:33.873 "traddr": "10.0.0.1", 00:11:33.873 "trsvcid": "49678" 00:11:33.873 }, 00:11:33.873 "auth": { 00:11:33.873 "state": "completed", 00:11:33.873 "digest": "sha512", 00:11:33.873 "dhgroup": "null" 00:11:33.873 } 00:11:33.873 } 00:11:33.873 ]' 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.873 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.133 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:34.133 16:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.711 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.986 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.987 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.555 00:11:35.555 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.555 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.555 16:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.814 
16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.814 { 00:11:35.814 "cntlid": 105, 00:11:35.814 "qid": 0, 00:11:35.814 "state": "enabled", 00:11:35.814 "thread": "nvmf_tgt_poll_group_000", 00:11:35.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:35.814 "listen_address": { 00:11:35.814 "trtype": "TCP", 00:11:35.814 "adrfam": "IPv4", 00:11:35.814 "traddr": "10.0.0.3", 00:11:35.814 "trsvcid": "4420" 00:11:35.814 }, 00:11:35.814 "peer_address": { 00:11:35.814 "trtype": "TCP", 00:11:35.814 "adrfam": "IPv4", 00:11:35.814 "traddr": "10.0.0.1", 00:11:35.814 "trsvcid": "49696" 00:11:35.814 }, 00:11:35.814 "auth": { 00:11:35.814 "state": "completed", 00:11:35.814 "digest": "sha512", 00:11:35.814 "dhgroup": "ffdhe2048" 00:11:35.814 } 00:11:35.814 } 00:11:35.814 ]' 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.814 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.073 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:36.073 16:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:37.009 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.009 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:37.009 16:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.009 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.009 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.009 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.010 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.577 00:11:37.577 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.577 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.577 16:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.836 { 00:11:37.836 "cntlid": 107, 00:11:37.836 "qid": 0, 00:11:37.836 "state": "enabled", 00:11:37.836 "thread": "nvmf_tgt_poll_group_000", 00:11:37.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:37.836 "listen_address": { 00:11:37.836 "trtype": "TCP", 00:11:37.836 "adrfam": "IPv4", 00:11:37.836 "traddr": "10.0.0.3", 00:11:37.836 "trsvcid": "4420" 00:11:37.836 }, 00:11:37.836 "peer_address": { 00:11:37.836 "trtype": "TCP", 00:11:37.836 "adrfam": "IPv4", 00:11:37.836 "traddr": "10.0.0.1", 00:11:37.836 "trsvcid": "49736" 00:11:37.836 }, 00:11:37.836 "auth": { 00:11:37.836 "state": "completed", 00:11:37.836 "digest": "sha512", 00:11:37.836 "dhgroup": "ffdhe2048" 00:11:37.836 } 00:11:37.836 } 00:11:37.836 ]' 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.836 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.095 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:38.096 16:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:39.031 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.290 16:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.550 00:11:39.550 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.550 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.550 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.809 { 00:11:39.809 "cntlid": 109, 00:11:39.809 "qid": 0, 00:11:39.809 "state": "enabled", 00:11:39.809 "thread": "nvmf_tgt_poll_group_000", 00:11:39.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:39.809 "listen_address": { 00:11:39.809 "trtype": "TCP", 00:11:39.809 "adrfam": "IPv4", 00:11:39.809 "traddr": "10.0.0.3", 00:11:39.809 "trsvcid": "4420" 00:11:39.809 }, 00:11:39.809 "peer_address": { 00:11:39.809 "trtype": "TCP", 00:11:39.809 "adrfam": "IPv4", 00:11:39.809 "traddr": "10.0.0.1", 00:11:39.809 "trsvcid": "49762" 00:11:39.809 }, 00:11:39.809 "auth": { 00:11:39.809 "state": "completed", 00:11:39.809 "digest": "sha512", 00:11:39.809 "dhgroup": "ffdhe2048" 00:11:39.809 } 00:11:39.809 } 00:11:39.809 ]' 00:11:39.809 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.068 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.327 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:40.327 16:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:40.899 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.899 16:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:40.899 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.899 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.158 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.158 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.158 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:41.158 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.417 16:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.678 00:11:41.678 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.678 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.678 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.938 { 00:11:41.938 "cntlid": 111, 00:11:41.938 "qid": 0, 00:11:41.938 "state": "enabled", 00:11:41.938 "thread": "nvmf_tgt_poll_group_000", 00:11:41.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:41.938 "listen_address": { 00:11:41.938 "trtype": "TCP", 00:11:41.938 "adrfam": "IPv4", 00:11:41.938 "traddr": "10.0.0.3", 00:11:41.938 "trsvcid": "4420" 00:11:41.938 }, 00:11:41.938 "peer_address": { 00:11:41.938 "trtype": "TCP", 00:11:41.938 "adrfam": "IPv4", 00:11:41.938 "traddr": "10.0.0.1", 00:11:41.938 "trsvcid": "51114" 00:11:41.938 }, 00:11:41.938 "auth": { 00:11:41.938 "state": "completed", 00:11:41.938 "digest": "sha512", 00:11:41.938 "dhgroup": "ffdhe2048" 00:11:41.938 } 00:11:41.938 } 00:11:41.938 ]' 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:41.938 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.198 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.198 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.198 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.456 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:42.457 16:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:43.026 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.336 16:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.906 00:11:43.906 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.906 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.906 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.166 { 00:11:44.166 "cntlid": 113, 00:11:44.166 "qid": 0, 00:11:44.166 "state": "enabled", 00:11:44.166 "thread": "nvmf_tgt_poll_group_000", 00:11:44.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:44.166 "listen_address": { 00:11:44.166 "trtype": "TCP", 00:11:44.166 "adrfam": "IPv4", 00:11:44.166 "traddr": "10.0.0.3", 00:11:44.166 "trsvcid": "4420" 00:11:44.166 }, 00:11:44.166 "peer_address": { 00:11:44.166 "trtype": "TCP", 00:11:44.166 "adrfam": "IPv4", 00:11:44.166 "traddr": "10.0.0.1", 00:11:44.166 "trsvcid": "51132" 00:11:44.166 }, 00:11:44.166 "auth": { 00:11:44.166 "state": "completed", 00:11:44.166 "digest": "sha512", 00:11:44.166 "dhgroup": "ffdhe3072" 00:11:44.166 } 00:11:44.166 } 00:11:44.166 ]' 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.166 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.426 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:44.426 16:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret 
DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.364 16:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.933 00:11:45.933 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.933 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.933 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.192 { 00:11:46.192 "cntlid": 115, 00:11:46.192 "qid": 0, 00:11:46.192 "state": "enabled", 00:11:46.192 "thread": "nvmf_tgt_poll_group_000", 00:11:46.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:46.192 "listen_address": { 00:11:46.192 "trtype": "TCP", 00:11:46.192 "adrfam": "IPv4", 00:11:46.192 "traddr": "10.0.0.3", 00:11:46.192 "trsvcid": "4420" 00:11:46.192 }, 00:11:46.192 "peer_address": { 00:11:46.192 "trtype": "TCP", 00:11:46.192 "adrfam": "IPv4", 00:11:46.192 "traddr": "10.0.0.1", 00:11:46.192 "trsvcid": "51152" 00:11:46.192 }, 00:11:46.192 "auth": { 00:11:46.192 "state": "completed", 00:11:46.192 "digest": "sha512", 00:11:46.192 "dhgroup": "ffdhe3072" 00:11:46.192 } 00:11:46.192 } 00:11:46.192 ]' 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.192 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.762 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:46.762 16:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 
52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:47.331 16:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.591 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.161 00:11:48.161 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.161 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.161 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.421 { 00:11:48.421 "cntlid": 117, 00:11:48.421 "qid": 0, 00:11:48.421 "state": "enabled", 00:11:48.421 "thread": "nvmf_tgt_poll_group_000", 00:11:48.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:48.421 "listen_address": { 00:11:48.421 "trtype": "TCP", 00:11:48.421 "adrfam": "IPv4", 00:11:48.421 "traddr": "10.0.0.3", 00:11:48.421 "trsvcid": "4420" 00:11:48.421 }, 00:11:48.421 "peer_address": { 00:11:48.421 "trtype": "TCP", 00:11:48.421 "adrfam": "IPv4", 00:11:48.421 "traddr": "10.0.0.1", 00:11:48.421 "trsvcid": "51190" 00:11:48.421 }, 00:11:48.421 "auth": { 00:11:48.421 "state": "completed", 00:11:48.421 "digest": "sha512", 00:11:48.421 "dhgroup": "ffdhe3072" 00:11:48.421 } 00:11:48.421 } 00:11:48.421 ]' 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.421 16:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.680 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:48.680 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:49.616 16:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.876 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.138 00:11:50.138 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.138 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.138 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.397 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.397 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.397 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.397 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.657 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.657 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.657 { 00:11:50.657 "cntlid": 119, 00:11:50.657 "qid": 0, 00:11:50.657 "state": "enabled", 00:11:50.657 "thread": "nvmf_tgt_poll_group_000", 00:11:50.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:50.657 "listen_address": { 00:11:50.657 "trtype": "TCP", 00:11:50.657 "adrfam": "IPv4", 00:11:50.657 "traddr": "10.0.0.3", 00:11:50.657 "trsvcid": "4420" 00:11:50.657 }, 00:11:50.657 "peer_address": { 00:11:50.657 "trtype": "TCP", 00:11:50.657 "adrfam": "IPv4", 00:11:50.657 "traddr": "10.0.0.1", 00:11:50.657 "trsvcid": "32772" 00:11:50.657 }, 00:11:50.657 "auth": { 00:11:50.657 "state": "completed", 00:11:50.657 "digest": "sha512", 00:11:50.657 "dhgroup": "ffdhe3072" 00:11:50.657 } 00:11:50.657 } 00:11:50.657 ]' 00:11:50.657 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.657 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.657 16:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.657 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:50.657 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.657 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.658 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.658 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.916 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:50.916 16:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:51.853 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:51.854 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.113 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.373 00:11:52.373 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.373 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.373 16:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.632 { 00:11:52.632 "cntlid": 121, 00:11:52.632 "qid": 0, 00:11:52.632 "state": "enabled", 00:11:52.632 "thread": "nvmf_tgt_poll_group_000", 00:11:52.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:52.632 "listen_address": { 00:11:52.632 "trtype": "TCP", 00:11:52.632 "adrfam": "IPv4", 00:11:52.632 "traddr": "10.0.0.3", 00:11:52.632 "trsvcid": "4420" 00:11:52.632 }, 00:11:52.632 "peer_address": { 00:11:52.632 "trtype": "TCP", 00:11:52.632 "adrfam": "IPv4", 00:11:52.632 "traddr": "10.0.0.1", 00:11:52.632 "trsvcid": "32806" 00:11:52.632 }, 00:11:52.632 "auth": { 00:11:52.632 "state": "completed", 00:11:52.632 "digest": "sha512", 00:11:52.632 "dhgroup": "ffdhe4096" 00:11:52.632 } 00:11:52.632 } 00:11:52.632 ]' 00:11:52.632 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.633 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.633 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.633 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:52.633 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.892 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.892 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.892 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.151 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret 
DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:53.151 16:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:53.721 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.980 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.549 00:11:54.549 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.549 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.549 16:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.808 { 00:11:54.808 "cntlid": 123, 00:11:54.808 "qid": 0, 00:11:54.808 "state": "enabled", 00:11:54.808 "thread": "nvmf_tgt_poll_group_000", 00:11:54.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:54.808 "listen_address": { 00:11:54.808 "trtype": "TCP", 00:11:54.808 "adrfam": "IPv4", 00:11:54.808 "traddr": "10.0.0.3", 00:11:54.808 "trsvcid": "4420" 00:11:54.808 }, 00:11:54.808 "peer_address": { 00:11:54.808 "trtype": "TCP", 00:11:54.808 "adrfam": "IPv4", 00:11:54.808 "traddr": "10.0.0.1", 00:11:54.808 "trsvcid": "32840" 00:11:54.808 }, 00:11:54.808 "auth": { 00:11:54.808 "state": "completed", 00:11:54.808 "digest": "sha512", 00:11:54.808 "dhgroup": "ffdhe4096" 00:11:54.808 } 00:11:54.808 } 00:11:54.808 ]' 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.808 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.377 16:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:55.377 16:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:55.945 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.204 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.205 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.205 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.205 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.205 16:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.205 16:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.774 00:11:56.774 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.774 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.774 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.033 { 00:11:57.033 "cntlid": 125, 00:11:57.033 "qid": 0, 00:11:57.033 "state": "enabled", 00:11:57.033 "thread": "nvmf_tgt_poll_group_000", 00:11:57.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:57.033 "listen_address": { 00:11:57.033 "trtype": "TCP", 00:11:57.033 "adrfam": "IPv4", 00:11:57.033 "traddr": "10.0.0.3", 00:11:57.033 "trsvcid": "4420" 00:11:57.033 }, 00:11:57.033 "peer_address": { 00:11:57.033 "trtype": "TCP", 00:11:57.033 "adrfam": "IPv4", 00:11:57.033 "traddr": "10.0.0.1", 00:11:57.033 "trsvcid": "32868" 00:11:57.033 }, 00:11:57.033 "auth": { 00:11:57.033 "state": "completed", 00:11:57.033 "digest": "sha512", 00:11:57.033 "dhgroup": "ffdhe4096" 00:11:57.033 } 00:11:57.033 } 00:11:57.033 ]' 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.033 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.600 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:57.600 16:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:58.167 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.426 16:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.684 00:11:58.684 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.684 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.684 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.251 { 00:11:59.251 "cntlid": 127, 00:11:59.251 "qid": 0, 00:11:59.251 "state": "enabled", 00:11:59.251 "thread": "nvmf_tgt_poll_group_000", 00:11:59.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:11:59.251 "listen_address": { 00:11:59.251 "trtype": "TCP", 00:11:59.251 "adrfam": "IPv4", 00:11:59.251 "traddr": "10.0.0.3", 00:11:59.251 "trsvcid": "4420" 00:11:59.251 }, 00:11:59.251 "peer_address": { 00:11:59.251 "trtype": "TCP", 00:11:59.251 "adrfam": "IPv4", 00:11:59.251 "traddr": "10.0.0.1", 00:11:59.251 "trsvcid": "32894" 00:11:59.251 }, 00:11:59.251 "auth": { 00:11:59.251 "state": "completed", 00:11:59.251 "digest": "sha512", 00:11:59.251 "dhgroup": "ffdhe4096" 00:11:59.251 } 00:11:59.251 } 00:11:59.251 ]' 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.251 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.510 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:11:59.510 16:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:00.078 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.646 16:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.646 16:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.904 00:12:00.904 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.904 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.904 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.164 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.164 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.164 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.164 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.423 { 00:12:01.423 "cntlid": 129, 00:12:01.423 "qid": 0, 00:12:01.423 "state": "enabled", 00:12:01.423 "thread": "nvmf_tgt_poll_group_000", 00:12:01.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:01.423 "listen_address": { 00:12:01.423 "trtype": "TCP", 00:12:01.423 "adrfam": "IPv4", 00:12:01.423 "traddr": "10.0.0.3", 00:12:01.423 "trsvcid": "4420" 00:12:01.423 }, 00:12:01.423 "peer_address": { 00:12:01.423 "trtype": "TCP", 00:12:01.423 "adrfam": "IPv4", 00:12:01.423 "traddr": "10.0.0.1", 00:12:01.423 "trsvcid": "37752" 00:12:01.423 }, 00:12:01.423 "auth": { 00:12:01.423 "state": "completed", 00:12:01.423 "digest": "sha512", 00:12:01.423 "dhgroup": "ffdhe6144" 00:12:01.423 } 00:12:01.423 } 00:12:01.423 ]' 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.423 16:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.683 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:01.683 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:02.620 16:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.880 16:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.880 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.140 00:12:03.140 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.140 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.140 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.705 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.705 { 00:12:03.705 "cntlid": 131, 00:12:03.705 "qid": 0, 00:12:03.705 "state": "enabled", 00:12:03.705 "thread": "nvmf_tgt_poll_group_000", 00:12:03.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:03.705 "listen_address": { 00:12:03.705 "trtype": "TCP", 00:12:03.705 "adrfam": "IPv4", 00:12:03.705 "traddr": "10.0.0.3", 00:12:03.706 "trsvcid": "4420" 00:12:03.706 }, 00:12:03.706 "peer_address": { 00:12:03.706 "trtype": "TCP", 00:12:03.706 "adrfam": "IPv4", 00:12:03.706 "traddr": "10.0.0.1", 00:12:03.706 "trsvcid": "37778" 00:12:03.706 }, 00:12:03.706 "auth": { 00:12:03.706 "state": "completed", 00:12:03.706 "digest": "sha512", 00:12:03.706 "dhgroup": "ffdhe6144" 00:12:03.706 } 00:12:03.706 } 00:12:03.706 ]' 00:12:03.706 16:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.706 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.964 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:12:03.964 16:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.532 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.100 16:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.100 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.359 00:12:05.359 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.359 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.359 16:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.618 { 00:12:05.618 "cntlid": 133, 00:12:05.618 "qid": 0, 00:12:05.618 "state": "enabled", 00:12:05.618 "thread": "nvmf_tgt_poll_group_000", 00:12:05.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:05.618 "listen_address": { 00:12:05.618 "trtype": "TCP", 00:12:05.618 "adrfam": "IPv4", 00:12:05.618 "traddr": "10.0.0.3", 00:12:05.618 "trsvcid": "4420" 00:12:05.618 }, 00:12:05.618 "peer_address": { 00:12:05.618 "trtype": "TCP", 00:12:05.618 "adrfam": "IPv4", 00:12:05.618 "traddr": "10.0.0.1", 00:12:05.618 "trsvcid": "37804" 00:12:05.618 }, 00:12:05.618 "auth": { 00:12:05.618 "state": "completed", 00:12:05.618 "digest": "sha512", 00:12:05.618 "dhgroup": "ffdhe6144" 00:12:05.618 } 00:12:05.618 } 00:12:05.618 ]' 00:12:05.618 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.877 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.136 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:12:06.136 16:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.704 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.963 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.530 00:12:07.530 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.531 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.531 16:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.531 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.531 { 00:12:07.531 "cntlid": 135, 00:12:07.531 "qid": 0, 00:12:07.531 "state": "enabled", 00:12:07.531 "thread": "nvmf_tgt_poll_group_000", 00:12:07.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:07.531 "listen_address": { 00:12:07.531 "trtype": "TCP", 00:12:07.531 "adrfam": "IPv4", 00:12:07.531 "traddr": "10.0.0.3", 00:12:07.531 "trsvcid": "4420" 00:12:07.531 }, 00:12:07.531 "peer_address": { 00:12:07.531 "trtype": "TCP", 00:12:07.531 "adrfam": "IPv4", 00:12:07.531 "traddr": "10.0.0.1", 00:12:07.531 "trsvcid": "37836" 00:12:07.531 }, 00:12:07.531 "auth": { 00:12:07.531 "state": "completed", 00:12:07.531 "digest": "sha512", 00:12:07.531 "dhgroup": "ffdhe6144" 00:12:07.531 } 00:12:07.531 } 00:12:07.531 ]' 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.790 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.049 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:08.049 16:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:08.618 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.186 16:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.754 00:12:09.754 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.754 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.754 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.013 { 00:12:10.013 "cntlid": 137, 00:12:10.013 "qid": 0, 00:12:10.013 "state": "enabled", 00:12:10.013 "thread": "nvmf_tgt_poll_group_000", 00:12:10.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:10.013 "listen_address": { 00:12:10.013 "trtype": "TCP", 00:12:10.013 "adrfam": "IPv4", 00:12:10.013 "traddr": "10.0.0.3", 00:12:10.013 "trsvcid": "4420" 00:12:10.013 }, 00:12:10.013 "peer_address": { 00:12:10.013 "trtype": "TCP", 00:12:10.013 "adrfam": "IPv4", 00:12:10.013 "traddr": "10.0.0.1", 00:12:10.013 "trsvcid": "37868" 00:12:10.013 }, 00:12:10.013 "auth": { 00:12:10.013 "state": "completed", 00:12:10.013 "digest": "sha512", 00:12:10.013 "dhgroup": "ffdhe8192" 00:12:10.013 } 00:12:10.013 } 00:12:10.013 ]' 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.013 16:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.013 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.272 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:10.272 16:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.208 16:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.208 16:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.776 00:12:12.036 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.036 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.036 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.295 { 00:12:12.295 "cntlid": 139, 00:12:12.295 "qid": 0, 00:12:12.295 "state": "enabled", 00:12:12.295 "thread": "nvmf_tgt_poll_group_000", 00:12:12.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:12.295 "listen_address": { 00:12:12.295 "trtype": "TCP", 00:12:12.295 "adrfam": "IPv4", 00:12:12.295 "traddr": "10.0.0.3", 00:12:12.295 "trsvcid": "4420" 00:12:12.295 }, 00:12:12.295 "peer_address": { 00:12:12.295 "trtype": "TCP", 00:12:12.295 "adrfam": "IPv4", 00:12:12.295 "traddr": "10.0.0.1", 00:12:12.295 "trsvcid": "56434" 00:12:12.295 }, 00:12:12.295 "auth": { 00:12:12.295 "state": "completed", 00:12:12.295 "digest": "sha512", 00:12:12.295 "dhgroup": "ffdhe8192" 00:12:12.295 } 00:12:12.295 } 00:12:12.295 ]' 00:12:12.295 16:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.295 16:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.555 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:12:12.555 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: --dhchap-ctrl-secret DHHC-1:02:Zjg5MGVkMGQ4MDc0MmEzMjlhYmRhYzNlNzFjNzBkZWNiYWJhN2VkZjYyMzJjMTdls9jlsA==: 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.492 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.493 16:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.135 00:12:14.135 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.135 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.135 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.394 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.394 { 00:12:14.394 "cntlid": 141, 00:12:14.394 "qid": 0, 00:12:14.394 "state": "enabled", 00:12:14.394 "thread": "nvmf_tgt_poll_group_000", 00:12:14.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:14.395 "listen_address": { 00:12:14.395 "trtype": "TCP", 00:12:14.395 "adrfam": "IPv4", 00:12:14.395 "traddr": "10.0.0.3", 00:12:14.395 "trsvcid": "4420" 00:12:14.395 }, 00:12:14.395 "peer_address": { 00:12:14.395 "trtype": "TCP", 00:12:14.395 "adrfam": "IPv4", 00:12:14.395 "traddr": "10.0.0.1", 00:12:14.395 "trsvcid": "56454" 00:12:14.395 }, 00:12:14.395 "auth": { 00:12:14.395 "state": "completed", 00:12:14.395 "digest": 
"sha512", 00:12:14.395 "dhgroup": "ffdhe8192" 00:12:14.395 } 00:12:14.395 } 00:12:14.395 ]' 00:12:14.395 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.653 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.653 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.653 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:14.653 16:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.653 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.653 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.653 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.912 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:12:14.912 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:01:NjhkMzQ3MWRjZDBiYTM3MDMzZGZmMzExMGM2OWIwNjIKG1Uo: 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.480 16:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.739 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.307 00:12:16.307 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.307 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.307 16:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.876 { 00:12:16.876 "cntlid": 143, 00:12:16.876 "qid": 0, 00:12:16.876 "state": "enabled", 00:12:16.876 "thread": "nvmf_tgt_poll_group_000", 00:12:16.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:16.876 "listen_address": { 00:12:16.876 "trtype": "TCP", 00:12:16.876 "adrfam": "IPv4", 00:12:16.876 "traddr": "10.0.0.3", 00:12:16.876 "trsvcid": "4420" 00:12:16.876 }, 00:12:16.876 "peer_address": { 00:12:16.876 "trtype": "TCP", 00:12:16.876 "adrfam": "IPv4", 00:12:16.876 "traddr": "10.0.0.1", 00:12:16.876 "trsvcid": "56474" 00:12:16.876 }, 00:12:16.876 "auth": { 00:12:16.876 "state": "completed", 00:12:16.876 
"digest": "sha512", 00:12:16.876 "dhgroup": "ffdhe8192" 00:12:16.876 } 00:12:16.876 } 00:12:16.876 ]' 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.876 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.135 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:17.135 16:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.704 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.964 16:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.532 00:12:18.790 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.790 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.790 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.049 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.049 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.049 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.050 { 00:12:19.050 "cntlid": 145, 00:12:19.050 "qid": 0, 00:12:19.050 "state": "enabled", 00:12:19.050 "thread": "nvmf_tgt_poll_group_000", 00:12:19.050 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:19.050 "listen_address": { 00:12:19.050 "trtype": "TCP", 00:12:19.050 "adrfam": "IPv4", 00:12:19.050 "traddr": "10.0.0.3", 00:12:19.050 "trsvcid": "4420" 00:12:19.050 }, 00:12:19.050 "peer_address": { 00:12:19.050 "trtype": "TCP", 00:12:19.050 "adrfam": "IPv4", 00:12:19.050 "traddr": "10.0.0.1", 00:12:19.050 "trsvcid": "56504" 00:12:19.050 }, 00:12:19.050 "auth": { 00:12:19.050 "state": "completed", 00:12:19.050 "digest": "sha512", 00:12:19.050 "dhgroup": "ffdhe8192" 00:12:19.050 } 00:12:19.050 } 00:12:19.050 ]' 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.050 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.309 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:19.309 16:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:00:MGZmNTMyZThlZDhjNDM2OTcwMGQzNDIwZjhlODQzNGJiMWM4ZDQ1YjUwZmJhYzhh375W3A==: --dhchap-ctrl-secret DHHC-1:03:Y2FhYjZkMjBlMWFlNmQ2NjdiMTU5NTIxZWZmMThhOGU3NzMzZTg0YmZhMGYxNjE5MDY0MWE5YTRiYmY4ZTc1NeNNGjI=: 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 00:12:19.877 16:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.877 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:19.878 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:20.446 request: 00:12:20.446 { 00:12:20.446 "name": "nvme0", 00:12:20.446 "trtype": "tcp", 00:12:20.446 "traddr": "10.0.0.3", 00:12:20.446 "adrfam": "ipv4", 00:12:20.446 "trsvcid": "4420", 00:12:20.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:20.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:20.446 "prchk_reftag": false, 00:12:20.446 "prchk_guard": false, 00:12:20.446 "hdgst": false, 00:12:20.446 "ddgst": false, 00:12:20.446 "dhchap_key": "key2", 00:12:20.446 "allow_unrecognized_csi": false, 00:12:20.446 "method": "bdev_nvme_attach_controller", 00:12:20.446 "req_id": 1 00:12:20.446 } 00:12:20.446 Got JSON-RPC error response 00:12:20.446 response: 00:12:20.446 { 00:12:20.446 "code": -5, 00:12:20.446 "message": "Input/output error" 00:12:20.446 } 00:12:20.446 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:20.447 
16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.447 16:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:21.015 request: 00:12:21.015 { 00:12:21.015 "name": "nvme0", 00:12:21.015 "trtype": "tcp", 00:12:21.015 "traddr": "10.0.0.3", 00:12:21.015 "adrfam": "ipv4", 00:12:21.015 "trsvcid": "4420", 00:12:21.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:21.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:21.015 "prchk_reftag": false, 00:12:21.015 "prchk_guard": false, 00:12:21.015 "hdgst": false, 00:12:21.015 "ddgst": false, 00:12:21.015 "dhchap_key": "key1", 00:12:21.015 "dhchap_ctrlr_key": "ckey2", 00:12:21.015 "allow_unrecognized_csi": false, 00:12:21.015 "method": "bdev_nvme_attach_controller", 00:12:21.015 "req_id": 1 00:12:21.015 } 00:12:21.015 Got JSON-RPC error response 00:12:21.015 response: 00:12:21.015 { 
00:12:21.015 "code": -5, 00:12:21.015 "message": "Input/output error" 00:12:21.015 } 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.015 16:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.673 
request: 00:12:21.673 { 00:12:21.673 "name": "nvme0", 00:12:21.673 "trtype": "tcp", 00:12:21.673 "traddr": "10.0.0.3", 00:12:21.673 "adrfam": "ipv4", 00:12:21.673 "trsvcid": "4420", 00:12:21.673 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:21.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:21.673 "prchk_reftag": false, 00:12:21.673 "prchk_guard": false, 00:12:21.673 "hdgst": false, 00:12:21.673 "ddgst": false, 00:12:21.673 "dhchap_key": "key1", 00:12:21.673 "dhchap_ctrlr_key": "ckey1", 00:12:21.673 "allow_unrecognized_csi": false, 00:12:21.673 "method": "bdev_nvme_attach_controller", 00:12:21.673 "req_id": 1 00:12:21.673 } 00:12:21.673 Got JSON-RPC error response 00:12:21.673 response: 00:12:21.673 { 00:12:21.673 "code": -5, 00:12:21.673 "message": "Input/output error" 00:12:21.673 } 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67259 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67259 ']' 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67259 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.673 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67259 00:12:21.932 killing process with pid 67259 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67259' 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67259 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67259 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.932 16:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70351 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70351 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70351 ']' 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.932 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70351 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70351 ']' 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
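The target has just been restarted with --wait-for-rpc and DH-HMAC-CHAP debug logging (-L nvmf_auth), and the test now blocks until the new process (pid 70351) answers on /var/tmp/spdk.sock before any further configuration. A minimal Bash sketch of that wait, assuming the waitforlisten helper does nothing more than poll the RPC socket (rpc_get_methods is used here only as a cheap liveness probe and is an assumption, not the helper's actual implementation):

  nvmfpid=70351                                   # pid reported by nvmfappstart above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1                # give up if the target process died
      sleep 0.5
  done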
00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.502 16:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.502 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.502 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:22.502 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:22.502 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.502 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 null0 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qRs 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.iTT ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iTT 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Cgx 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.57Z ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57Z 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:22.762 16:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fc6 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.VGi ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VGi 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GBG 00:12:22.762 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
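This part provisions the restarted target from key files instead of inline secrets: each /tmp/spdk.key-* file is registered in the keyring under a name (key0..key3 plus the ckey* controller keys), the host NQN is then allowed on nqn.2024-03.io.spdk:cnode0 with --dhchap-key key3, and the host-side controller is attached using the same key name. A condensed sketch of that flow, assuming the target answers on /var/tmp/spdk.sock and the host-side SPDK app on /var/tmp/host.sock as elsewhere in this log, and that the host app has the same key names registered in its own keyring:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8

  # Target side: load the DH-HMAC-CHAP secret into the keyring and allow the host with it.
  "$rpc" -s /var/tmp/spdk.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GBG
  "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

  # Host side: attach a controller that authenticates with the matching key name.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3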
00:12:22.763 16:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.701 nvme0n1 00:12:23.701 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.701 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.701 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.960 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.960 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.960 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.219 { 00:12:24.219 "cntlid": 1, 00:12:24.219 "qid": 0, 00:12:24.219 "state": "enabled", 00:12:24.219 "thread": "nvmf_tgt_poll_group_000", 00:12:24.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:24.219 "listen_address": { 00:12:24.219 "trtype": "TCP", 00:12:24.219 "adrfam": "IPv4", 00:12:24.219 "traddr": "10.0.0.3", 00:12:24.219 "trsvcid": "4420" 00:12:24.219 }, 00:12:24.219 "peer_address": { 00:12:24.219 "trtype": "TCP", 00:12:24.219 "adrfam": "IPv4", 00:12:24.219 "traddr": "10.0.0.1", 00:12:24.219 "trsvcid": "57476" 00:12:24.219 }, 00:12:24.219 "auth": { 00:12:24.219 "state": "completed", 00:12:24.219 "digest": "sha512", 00:12:24.219 "dhgroup": "ffdhe8192" 00:12:24.219 } 00:12:24.219 } 00:12:24.219 ]' 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.219 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.479 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:24.479 16:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:25.047 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.047 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:25.047 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.047 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.047 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key3 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:25.306 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.566 16:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.826 request: 00:12:25.826 { 00:12:25.826 "name": "nvme0", 00:12:25.826 "trtype": "tcp", 00:12:25.826 "traddr": "10.0.0.3", 00:12:25.826 "adrfam": "ipv4", 00:12:25.826 "trsvcid": "4420", 00:12:25.826 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:25.826 "prchk_reftag": false, 00:12:25.826 "prchk_guard": false, 00:12:25.826 "hdgst": false, 00:12:25.826 "ddgst": false, 00:12:25.826 "dhchap_key": "key3", 00:12:25.826 "allow_unrecognized_csi": false, 00:12:25.826 "method": "bdev_nvme_attach_controller", 00:12:25.826 "req_id": 1 00:12:25.826 } 00:12:25.826 Got JSON-RPC error response 00:12:25.826 response: 00:12:25.826 { 00:12:25.826 "code": -5, 00:12:25.826 "message": "Input/output error" 00:12:25.826 } 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:25.826 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.085 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.345 request: 00:12:26.345 { 00:12:26.345 "name": "nvme0", 00:12:26.345 "trtype": "tcp", 00:12:26.345 "traddr": "10.0.0.3", 00:12:26.345 "adrfam": "ipv4", 00:12:26.345 "trsvcid": "4420", 00:12:26.345 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:26.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:26.345 "prchk_reftag": false, 00:12:26.345 "prchk_guard": false, 00:12:26.345 "hdgst": false, 00:12:26.345 "ddgst": false, 00:12:26.345 "dhchap_key": "key3", 00:12:26.345 "allow_unrecognized_csi": false, 00:12:26.345 "method": "bdev_nvme_attach_controller", 00:12:26.345 "req_id": 1 00:12:26.345 } 00:12:26.345 Got JSON-RPC error response 00:12:26.345 response: 00:12:26.345 { 00:12:26.345 "code": -5, 00:12:26.345 "message": "Input/output error" 00:12:26.345 } 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:26.345 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:26.604 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:26.604 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.604 16:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:26.604 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:26.605 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:27.172 request: 00:12:27.172 { 00:12:27.172 "name": "nvme0", 00:12:27.172 "trtype": "tcp", 00:12:27.172 "traddr": "10.0.0.3", 00:12:27.172 "adrfam": "ipv4", 00:12:27.172 "trsvcid": "4420", 00:12:27.172 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:27.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:27.172 "prchk_reftag": false, 00:12:27.172 "prchk_guard": false, 00:12:27.172 "hdgst": false, 00:12:27.172 "ddgst": false, 00:12:27.172 "dhchap_key": "key0", 00:12:27.172 "dhchap_ctrlr_key": "key1", 00:12:27.172 "allow_unrecognized_csi": false, 00:12:27.172 "method": "bdev_nvme_attach_controller", 00:12:27.172 "req_id": 1 00:12:27.172 } 00:12:27.172 Got JSON-RPC error response 00:12:27.172 response: 00:12:27.172 { 00:12:27.172 "code": -5, 00:12:27.172 "message": "Input/output error" 00:12:27.172 } 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:27.172 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:27.431 nvme0n1 00:12:27.431 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:27.431 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.431 16:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:27.690 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.690 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.690 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:27.949 16:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:28.887 nvme0n1 00:12:28.887 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:28.887 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:28.887 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:29.145 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.713 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.713 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:29.713 16:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid 52629f53-148b-4998-bf25-3a0243614ae8 -l 0 --dhchap-secret DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: --dhchap-ctrl-secret DHHC-1:03:NTFmODM0MGM3ZjBlYWZkNTQ4ZDM0ZTk3YjY2NzkzOTEyZTlmOTEyOWE4OTYzYTNlMGNmY2RlNGU3OTk2MzM1ZuqWR1o=: 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.281 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:30.540 16:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:31.146 request: 00:12:31.146 { 00:12:31.146 "name": "nvme0", 00:12:31.146 "trtype": "tcp", 00:12:31.146 "traddr": "10.0.0.3", 00:12:31.146 "adrfam": "ipv4", 00:12:31.146 "trsvcid": "4420", 00:12:31.146 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8", 00:12:31.146 "prchk_reftag": false, 00:12:31.146 "prchk_guard": false, 00:12:31.146 "hdgst": false, 00:12:31.146 "ddgst": false, 00:12:31.146 "dhchap_key": "key1", 00:12:31.146 "allow_unrecognized_csi": false, 00:12:31.146 "method": "bdev_nvme_attach_controller", 00:12:31.146 "req_id": 1 00:12:31.146 } 00:12:31.146 Got JSON-RPC error response 00:12:31.146 response: 00:12:31.146 { 00:12:31.146 "code": -5, 00:12:31.146 "message": "Input/output error" 00:12:31.146 } 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:31.146 16:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:32.081 nvme0n1 00:12:32.081 
16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:32.081 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.081 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:32.339 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.339 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.339 16:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:32.597 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:32.855 nvme0n1 00:12:32.855 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:32.855 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.855 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:33.114 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.114 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.114 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 16:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: '' 2s 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: ]] 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmIwODE5MWM4NTJkMTVlZGFhNGFmNzE2NDRmZmJlY2HR8umH: 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:33.372 16:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: 2s 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:35.901 16:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: ]] 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTgwYTFmYzNlOGJmNTRhOWU3NDEzZTYzMGQzNGRkZTk1MDg5MWVkNzZiNTNjNmE3/r+D2A==: 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:35.901 16:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.806 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:37.807 16:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:38.375 nvme0n1 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.375 16:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.943 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:38.943 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.943 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:39.202 16:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:39.770 16:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.770 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:40.338 request: 00:12:40.338 { 00:12:40.338 "name": "nvme0", 00:12:40.338 "dhchap_key": "key1", 00:12:40.338 "dhchap_ctrlr_key": "key3", 00:12:40.338 "method": "bdev_nvme_set_keys", 00:12:40.338 "req_id": 1 00:12:40.338 } 00:12:40.338 Got JSON-RPC error response 00:12:40.338 response: 00:12:40.338 { 00:12:40.338 "code": -13, 00:12:40.338 "message": "Permission denied" 00:12:40.338 } 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:40.338 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:40.597 16:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.857 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:40.857 16:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:41.803 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:41.803 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:41.803 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:42.073 16:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:43.010 nvme0n1 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:43.010 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:43.577 request: 00:12:43.577 { 00:12:43.577 "name": "nvme0", 00:12:43.577 "dhchap_key": "key2", 00:12:43.577 "dhchap_ctrlr_key": "key0", 00:12:43.577 "method": "bdev_nvme_set_keys", 00:12:43.577 "req_id": 1 00:12:43.577 } 00:12:43.577 Got JSON-RPC error response 00:12:43.577 response: 00:12:43.577 { 00:12:43.577 "code": -13, 00:12:43.577 "message": "Permission denied" 00:12:43.577 } 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:43.577 16:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.836 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:43.836 16:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:44.773 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:44.773 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:44.773 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67289 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67289 ']' 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67289 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.033 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67289 00:12:45.293 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:45.293 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:45.293 killing process with pid 67289 00:12:45.293 16:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67289' 00:12:45.293 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67289 00:12:45.293 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67289 00:12:45.552 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:45.552 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.552 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:45.552 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.552 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:45.553 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.553 16:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.553 rmmod nvme_tcp 00:12:45.553 rmmod nvme_fabrics 00:12:45.553 rmmod nvme_keyring 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70351 ']' 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70351 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70351 ']' 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70351 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70351 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.553 killing process with pid 70351 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70351' 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70351 00:12:45.553 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70351 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
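[editorial aside] The teardown entries here run the suite's iptables cleanup (the ipts/iptr helpers from nvmf/common.sh visible in the trace): rules are inserted with an SPDK_NVMF comment tag, and cleanup re-loads the saved ruleset with those tagged lines filtered out, so pre-existing firewall rules on the CI host are left alone. A minimal stand-alone sketch of that idiom follows; the helper names below are illustrative, not the actual SPDK functions.

  #!/usr/bin/env bash
  # Sketch of the tagged-rule idiom seen in the surrounding log entries.

  add_tagged_rule() {
      # Insert a rule and tag it with a comment so it can be found later.
      # Mirrors the ordering shown in the log (-j ACCEPT followed by -m comment).
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  remove_tagged_rules() {
      # Re-load the current ruleset minus every line carrying the tag,
      # which drops only the rules this test run added.
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

  # Example: open the NVMe/TCP listener port on the test interface, then undo it.
  add_tagged_rule -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  remove_tagged_rules

Tagging by comment keeps the cleanup idempotent: it can run even if the setup half failed partway through, which is why the real iptr call appears unconditionally in the fini path here.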
00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:45.812 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qRs /tmp/spdk.key-sha256.Cgx /tmp/spdk.key-sha384.fc6 /tmp/spdk.key-sha512.GBG /tmp/spdk.key-sha512.iTT /tmp/spdk.key-sha384.57Z /tmp/spdk.key-sha256.VGi '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:46.071 00:12:46.071 real 3m11.124s 00:12:46.071 user 7m37.220s 00:12:46.071 sys 0m29.913s 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.071 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.071 ************************************ 00:12:46.071 END TEST nvmf_auth_target 
00:12:46.071 ************************************ 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.331 ************************************ 00:12:46.331 START TEST nvmf_bdevio_no_huge 00:12:46.331 ************************************ 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:46.331 * Looking for test storage... 00:12:46.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.331 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.332 --rc genhtml_branch_coverage=1 00:12:46.332 --rc genhtml_function_coverage=1 00:12:46.332 --rc genhtml_legend=1 00:12:46.332 --rc geninfo_all_blocks=1 00:12:46.332 --rc geninfo_unexecuted_blocks=1 00:12:46.332 00:12:46.332 ' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.332 --rc genhtml_branch_coverage=1 00:12:46.332 --rc genhtml_function_coverage=1 00:12:46.332 --rc genhtml_legend=1 00:12:46.332 --rc geninfo_all_blocks=1 00:12:46.332 --rc geninfo_unexecuted_blocks=1 00:12:46.332 00:12:46.332 ' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.332 --rc genhtml_branch_coverage=1 00:12:46.332 --rc genhtml_function_coverage=1 00:12:46.332 --rc genhtml_legend=1 00:12:46.332 --rc geninfo_all_blocks=1 00:12:46.332 --rc geninfo_unexecuted_blocks=1 00:12:46.332 00:12:46.332 ' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.332 --rc genhtml_branch_coverage=1 00:12:46.332 --rc genhtml_function_coverage=1 00:12:46.332 --rc genhtml_legend=1 00:12:46.332 --rc geninfo_all_blocks=1 00:12:46.332 --rc geninfo_unexecuted_blocks=1 00:12:46.332 00:12:46.332 ' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.332 
16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:46.332 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.333 
16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:46.333 Cannot find device "nvmf_init_br" 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:46.333 Cannot find device "nvmf_init_br2" 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:46.333 Cannot find device "nvmf_tgt_br" 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:46.333 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.593 Cannot find device "nvmf_tgt_br2" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:46.593 Cannot find device "nvmf_init_br" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:46.593 Cannot find device "nvmf_init_br2" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:46.593 Cannot find device "nvmf_tgt_br" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:46.593 Cannot find device "nvmf_tgt_br2" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:46.593 Cannot find device "nvmf_br" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:46.593 Cannot find device "nvmf_init_if" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:46.593 Cannot find device "nvmf_init_if2" 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:46.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:46.593 16:58:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:46.593 16:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:46.593 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:46.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:12:46.853 00:12:46.853 --- 10.0.0.3 ping statistics --- 00:12:46.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.853 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:46.853 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:46.853 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:46.853 00:12:46.853 --- 10.0.0.4 ping statistics --- 00:12:46.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.853 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:46.853 00:12:46.853 --- 10.0.0.1 ping statistics --- 00:12:46.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.853 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:46.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:46.853 00:12:46.853 --- 10.0.0.2 ping statistics --- 00:12:46.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.853 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70975 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70975 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70975 ']' 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.853 16:58:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.853 [2024-11-20 16:58:05.290601] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
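For orientation, the nvmf_veth_init sequence traced above can be condensed into the sketch below. This is not the helper itself, only the shape of what it builds, taken from the commands visible in the trace: two initiator-side veth interfaces left in the root namespace (10.0.0.1, 10.0.0.2), two target-side interfaces moved into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), their peer ends enslaved to the nvmf_br bridge, and ACCEPT rules for the NVMe/TCP port 4420. The real ipts helper additionally tags each rule with an SPDK_NVMF comment so teardown can strip them later.

# Condensed sketch of the topology nvmf_veth_init builds (not the script itself).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the four *_br ends together and accept NVMe/TCP traffic on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check in both directions, exactly as the trace does.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With connectivity verified, nvmfappstart launches nvmf_tgt inside the namespace with --no-huge -s 1024 (no hugepages, 1024 MB of memory), which is what the _no_huge variant of this test exercises.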
00:12:46.853 [2024-11-20 16:58:05.290709] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:47.112 [2024-11-20 16:58:05.457603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.112 [2024-11-20 16:58:05.538405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.112 [2024-11-20 16:58:05.538475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.112 [2024-11-20 16:58:05.538490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.112 [2024-11-20 16:58:05.538502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.112 [2024-11-20 16:58:05.538511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.112 [2024-11-20 16:58:05.539603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:47.112 [2024-11-20 16:58:05.539725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:47.112 [2024-11-20 16:58:05.539900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:47.112 [2024-11-20 16:58:05.539909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.112 [2024-11-20 16:58:05.545984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 [2024-11-20 16:58:06.365323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 Malloc0 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.049 16:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.049 [2024-11-20 16:58:06.405512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:48.049 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:48.049 { 00:12:48.049 "params": { 00:12:48.049 "name": "Nvme$subsystem", 00:12:48.050 "trtype": "$TEST_TRANSPORT", 00:12:48.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.050 "adrfam": "ipv4", 00:12:48.050 "trsvcid": "$NVMF_PORT", 00:12:48.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.050 "hdgst": ${hdgst:-false}, 00:12:48.050 "ddgst": ${ddgst:-false} 00:12:48.050 }, 00:12:48.050 "method": "bdev_nvme_attach_controller" 00:12:48.050 } 00:12:48.050 EOF 00:12:48.050 )") 00:12:48.050 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:48.050 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
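Written out as plain rpc.py calls, the provisioning steps bdevio.sh just issued through rpc_cmd look roughly like the following. This is a sketch only; it assumes rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock, as the rpc_py variable used elsewhere in this suite does, and the arguments are copied from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with an 8 KiB I/O unit size, as passed via NVMF_TRANSPORT_OPTS above.
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB backing bdev with 512-byte blocks (matches the Nvme1n1 geometry bdevio reports later).
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio then acts as the initiator: gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed next in the trace, and it is handed to the bdevio binary through --json /dev/fd/62, also with --no-huge -s 1024.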
00:12:48.050 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:48.050 16:58:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:48.050 "params": { 00:12:48.050 "name": "Nvme1", 00:12:48.050 "trtype": "tcp", 00:12:48.050 "traddr": "10.0.0.3", 00:12:48.050 "adrfam": "ipv4", 00:12:48.050 "trsvcid": "4420", 00:12:48.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:48.050 "hdgst": false, 00:12:48.050 "ddgst": false 00:12:48.050 }, 00:12:48.050 "method": "bdev_nvme_attach_controller" 00:12:48.050 }' 00:12:48.050 [2024-11-20 16:58:06.467039] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:12:48.050 [2024-11-20 16:58:06.467148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71017 ] 00:12:48.308 [2024-11-20 16:58:06.630189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.308 [2024-11-20 16:58:06.713448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.308 [2024-11-20 16:58:06.713595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.308 [2024-11-20 16:58:06.713822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.308 [2024-11-20 16:58:06.728062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.567 I/O targets: 00:12:48.567 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:48.567 00:12:48.567 00:12:48.567 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.567 http://cunit.sourceforge.net/ 00:12:48.567 00:12:48.567 00:12:48.567 Suite: bdevio tests on: Nvme1n1 00:12:48.567 Test: blockdev write read block ...passed 00:12:48.567 Test: blockdev write zeroes read block ...passed 00:12:48.567 Test: blockdev write zeroes read no split ...passed 00:12:48.567 Test: blockdev write zeroes read split ...passed 00:12:48.567 Test: blockdev write zeroes read split partial ...passed 00:12:48.567 Test: blockdev reset ...[2024-11-20 16:58:06.967585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:48.567 [2024-11-20 16:58:06.967811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2089310 (9): Bad file descriptor 00:12:48.567 [2024-11-20 16:58:06.988024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:48.567 passed 00:12:48.567 Test: blockdev write read 8 blocks ...passed 00:12:48.567 Test: blockdev write read size > 128k ...passed 00:12:48.567 Test: blockdev write read invalid size ...passed 00:12:48.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.567 Test: blockdev write read max offset ...passed 00:12:48.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.567 Test: blockdev writev readv 8 blocks ...passed 00:12:48.567 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.567 Test: blockdev writev readv block ...passed 00:12:48.567 Test: blockdev writev readv size > 128k ...passed 00:12:48.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.567 Test: blockdev comparev and writev ...[2024-11-20 16:58:06.996048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.567 [2024-11-20 16:58:06.996088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:48.567 [2024-11-20 16:58:06.996108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.567 [2024-11-20 16:58:06.996119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:48.567 [2024-11-20 16:58:06.996420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.567 [2024-11-20 16:58:06.996443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.996461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.568 [2024-11-20 16:58:06.996471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.996748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.568 [2024-11-20 16:58:06.996764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.996781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.568 [2024-11-20 16:58:06.996791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.997081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.568 [2024-11-20 16:58:06.997104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.997121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:48.568 [2024-11-20 16:58:06.997132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:48.568 passed 00:12:48.568 Test: blockdev nvme passthru rw ...passed 00:12:48.568 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:58:06.998025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.568 [2024-11-20 16:58:06.998055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.998157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.568 [2024-11-20 16:58:06.998174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.998275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.568 [2024-11-20 16:58:06.998291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:48.568 [2024-11-20 16:58:06.998394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:48.568 [2024-11-20 16:58:06.998423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:48.568 passed 00:12:48.568 Test: blockdev nvme admin passthru ...passed 00:12:48.568 Test: blockdev copy ...passed 00:12:48.568 00:12:48.568 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.568 suites 1 1 n/a 0 0 00:12:48.568 tests 23 23 23 0 0 00:12:48.568 asserts 152 152 152 0 n/a 00:12:48.568 00:12:48.568 Elapsed time = 0.182 seconds 00:12:48.827 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.827 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.827 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.086 rmmod nvme_tcp 00:12:49.086 rmmod nvme_fabrics 00:12:49.086 rmmod nvme_keyring 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70975 ']' 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70975 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70975 ']' 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70975 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70975 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:49.086 killing process with pid 70975 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70975' 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70975 00:12:49.086 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70975 00:12:49.345 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.345 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.345 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:49.346 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:49.605 16:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:49.605 16:58:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:49.605 00:12:49.605 real 0m3.466s 00:12:49.605 user 0m10.650s 00:12:49.605 sys 0m1.414s 00:12:49.605 ************************************ 00:12:49.605 END TEST nvmf_bdevio_no_huge 00:12:49.605 ************************************ 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.605 16:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.864 ************************************ 00:12:49.864 START TEST nvmf_tls 00:12:49.864 ************************************ 00:12:49.864 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:49.864 * Looking for test storage... 
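The nvmftestfini teardown that closed out nvmf_bdevio_no_huge above is the mirror image of the setup; condensed from the trace, it is roughly the following sketch.

# Drop only the SPDK-tagged firewall rules, then dismantle the bridge, veths and namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns ends by deleting the namespace; the trace only shows the helper being invoked

The nvmf_tls test that starts here reruns nvmftestinit, rebuilds exactly the same namespace, bridge and 10.0.0.0/24 layout, and then switches the default socket implementation to ssl before exercising TLS version negotiation.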
00:12:49.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:49.864 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.864 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.864 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.864 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.865 --rc genhtml_branch_coverage=1 00:12:49.865 --rc genhtml_function_coverage=1 00:12:49.865 --rc genhtml_legend=1 00:12:49.865 --rc geninfo_all_blocks=1 00:12:49.865 --rc geninfo_unexecuted_blocks=1 00:12:49.865 00:12:49.865 ' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.865 --rc genhtml_branch_coverage=1 00:12:49.865 --rc genhtml_function_coverage=1 00:12:49.865 --rc genhtml_legend=1 00:12:49.865 --rc geninfo_all_blocks=1 00:12:49.865 --rc geninfo_unexecuted_blocks=1 00:12:49.865 00:12:49.865 ' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.865 --rc genhtml_branch_coverage=1 00:12:49.865 --rc genhtml_function_coverage=1 00:12:49.865 --rc genhtml_legend=1 00:12:49.865 --rc geninfo_all_blocks=1 00:12:49.865 --rc geninfo_unexecuted_blocks=1 00:12:49.865 00:12:49.865 ' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.865 --rc genhtml_branch_coverage=1 00:12:49.865 --rc genhtml_function_coverage=1 00:12:49.865 --rc genhtml_legend=1 00:12:49.865 --rc geninfo_all_blocks=1 00:12:49.865 --rc geninfo_unexecuted_blocks=1 00:12:49.865 00:12:49.865 ' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.865 16:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.865 
16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.865 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:49.866 Cannot find device "nvmf_init_br" 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:49.866 Cannot find device "nvmf_init_br2" 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:49.866 Cannot find device "nvmf_tgt_br" 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.866 Cannot find device "nvmf_tgt_br2" 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:49.866 Cannot find device "nvmf_init_br" 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:49.866 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:50.125 Cannot find device "nvmf_init_br2" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:50.125 Cannot find device "nvmf_tgt_br" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:50.125 Cannot find device "nvmf_tgt_br2" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:50.125 Cannot find device "nvmf_br" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:50.125 Cannot find device "nvmf_init_if" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:50.125 Cannot find device "nvmf_init_if2" 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.125 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.126 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:50.385 16:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:50.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:50.385 00:12:50.385 --- 10.0.0.3 ping statistics --- 00:12:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.385 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:50.385 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:50.385 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:12:50.385 00:12:50.385 --- 10.0.0.4 ping statistics --- 00:12:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.385 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:50.385 00:12:50.385 --- 10.0.0.1 ping statistics --- 00:12:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.385 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:50.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:50.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:12:50.385 00:12:50.385 --- 10.0.0.2 ping statistics --- 00:12:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.385 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71257 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71257 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71257 ']' 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.385 16:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.385 [2024-11-20 16:58:08.816553] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
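The interface plumbing traced above from nvmf/common.sh can be condensed into a standalone sketch. Interface names, addresses and the 4420 port are taken straight from the trace; the ip netns add step is an assumption, since the nvmf_tgt_ns_spdk namespace already exists by the time this excerpt begins.

  # Hedged recap of the test topology, not the literal nvmf/common.sh source.
  # Initiator-side veth ends stay in the default namespace, target-side ends move
  # into nvmf_tgt_ns_spdk, and all bridge-side peers are enslaved to nvmf_br.
  ip netns add nvmf_tgt_ns_spdk                      # assumed: created before this excerpt
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                  # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # namespace -> host
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The four pings mirror the connectivity check above: both initiator addresses and both target addresses have to be reachable across nvmf_br before the target is started.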
00:12:50.385 [2024-11-20 16:58:08.816656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.644 [2024-11-20 16:58:08.969502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.645 [2024-11-20 16:58:09.031500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.645 [2024-11-20 16:58:09.031579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.645 [2024-11-20 16:58:09.031603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.645 [2024-11-20 16:58:09.031614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.645 [2024-11-20 16:58:09.031623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.645 [2024-11-20 16:58:09.032122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.580 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.580 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:51.580 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.580 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.580 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.581 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.581 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:51.581 16:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:51.581 true 00:12:51.581 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:51.581 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.840 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:51.840 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:51.840 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:52.105 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.105 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:52.364 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:52.364 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:52.364 16:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:52.624 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:52.624 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:52.884 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:52.884 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:52.884 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.884 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:53.143 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:53.144 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:53.144 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:53.407 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.407 16:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:53.666 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:53.666 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:53.666 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:53.926 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:53.926 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3XBbl4goVw 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Xg0ZBxVdI7 00:12:54.185 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:54.186 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:54.186 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3XBbl4goVw 00:12:54.186 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Xg0ZBxVdI7 00:12:54.186 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:54.445 16:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:54.704 [2024-11-20 16:58:13.234618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.964 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3XBbl4goVw 00:12:54.964 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3XBbl4goVw 00:12:54.964 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.222 [2024-11-20 16:58:13.555438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.222 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:55.481 16:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:55.740 [2024-11-20 16:58:14.023635] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:55.740 [2024-11-20 16:58:14.023935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:55.740 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:55.999 malloc0 00:12:55.999 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.257 16:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3XBbl4goVw 00:12:56.516 16:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:56.775 16:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3XBbl4goVw 00:13:06.817 Initializing NVMe Controllers 00:13:06.817 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.817 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:06.817 Initialization complete. Launching workers. 00:13:06.817 ======================================================== 00:13:06.817 Latency(us) 00:13:06.817 Device Information : IOPS MiB/s Average min max 00:13:06.817 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10708.18 41.83 5977.95 1408.87 7966.65 00:13:06.817 ======================================================== 00:13:06.817 Total : 10708.18 41.83 5977.95 1408.87 7966.65 00:13:06.817 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XBbl4goVw 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XBbl4goVw 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71495 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71495 /var/tmp/bdevperf.sock 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71495 ']' 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
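Strung together, the target-side configuration traced above (target/tls.sh@119 through @138) reduces to the RPC sequence below. The interchange key string and the temp path are the ones generated in this run, rpc is shorthand for the rpc.py path used throughout the log, and the target was started with --wait-for-rpc, which is what allows the socket options to be set before framework_start_init.

  # Condensed target-side TLS setup, using the values from this run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/tmp/tmp.3XBbl4goVw                                  # mktemp result in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"                                        # keep the PSK private
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Quick initiator-side check from inside the namespace, as in target/tls.sh@138:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key_path"

The second key file generated earlier, /tmp/tmp.Xg0ZBxVdI7, is deliberately never registered with the subsystem; the negative cases later in the run rely on that.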
00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.817 16:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.817 [2024-11-20 16:58:25.336227] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:06.817 [2024-11-20 16:58:25.336333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71495 ] 00:13:07.077 [2024-11-20 16:58:25.490172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.077 [2024-11-20 16:58:25.545977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.077 [2024-11-20 16:58:25.603577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.014 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.014 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:08.014 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XBbl4goVw 00:13:08.273 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:08.273 [2024-11-20 16:58:26.773902] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.533 TLSTESTn1 00:13:08.533 16:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:08.533 Running I/O for 10 seconds... 
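On the host side, the bdevperf leg traced above (pid 71495 in this run) boils down to the following: start bdevperf idle with -z, point its private RPC socket at the key and the TLS listener, then kick off the workload with bdevperf.py.

  # Client-side TLS attach and workload, as traced above.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  # (the script waits for $sock to come up here -- waitforlisten)
  $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.3XBbl4goVw
  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests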
00:13:10.852 4480.00 IOPS, 17.50 MiB/s [2024-11-20T16:58:30.327Z] 4522.50 IOPS, 17.67 MiB/s [2024-11-20T16:58:31.261Z] 4538.33 IOPS, 17.73 MiB/s [2024-11-20T16:58:32.194Z] 4559.75 IOPS, 17.81 MiB/s [2024-11-20T16:58:33.131Z] 4573.60 IOPS, 17.87 MiB/s [2024-11-20T16:58:34.067Z] 4523.00 IOPS, 17.67 MiB/s [2024-11-20T16:58:35.005Z] 4519.00 IOPS, 17.65 MiB/s [2024-11-20T16:58:36.384Z] 4528.88 IOPS, 17.69 MiB/s [2024-11-20T16:58:37.322Z] 4536.67 IOPS, 17.72 MiB/s [2024-11-20T16:58:37.322Z] 4541.00 IOPS, 17.74 MiB/s 00:13:18.786 Latency(us) 00:13:18.786 [2024-11-20T16:58:37.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.786 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:18.786 Verification LBA range: start 0x0 length 0x2000 00:13:18.786 TLSTESTn1 : 10.01 4546.81 17.76 0.00 0.00 28101.47 5421.61 23950.43 00:13:18.786 [2024-11-20T16:58:37.322Z] =================================================================================================================== 00:13:18.786 [2024-11-20T16:58:37.322Z] Total : 4546.81 17.76 0.00 0.00 28101.47 5421.61 23950.43 00:13:18.786 { 00:13:18.786 "results": [ 00:13:18.786 { 00:13:18.786 "job": "TLSTESTn1", 00:13:18.786 "core_mask": "0x4", 00:13:18.786 "workload": "verify", 00:13:18.786 "status": "finished", 00:13:18.786 "verify_range": { 00:13:18.786 "start": 0, 00:13:18.786 "length": 8192 00:13:18.786 }, 00:13:18.786 "queue_depth": 128, 00:13:18.786 "io_size": 4096, 00:13:18.786 "runtime": 10.014944, 00:13:18.786 "iops": 4546.805254228081, 00:13:18.786 "mibps": 17.760958024328442, 00:13:18.786 "io_failed": 0, 00:13:18.786 "io_timeout": 0, 00:13:18.786 "avg_latency_us": 28101.46982687025, 00:13:18.786 "min_latency_us": 5421.614545454546, 00:13:18.786 "max_latency_us": 23950.429090909092 00:13:18.786 } 00:13:18.786 ], 00:13:18.786 "core_count": 1 00:13:18.786 } 00:13:18.786 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:18.786 16:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71495 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71495 ']' 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71495 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71495 00:13:18.786 killing process with pid 71495 00:13:18.786 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.786 00:13:18.786 Latency(us) 00:13:18.786 [2024-11-20T16:58:37.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.786 [2024-11-20T16:58:37.322Z] =================================================================================================================== 00:13:18.786 [2024-11-20T16:58:37.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:18.786 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71495' 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71495 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71495 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xg0ZBxVdI7 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xg0ZBxVdI7 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xg0ZBxVdI7 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Xg0ZBxVdI7 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71631 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71631 /var/tmp/bdevperf.sock 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71631 ']' 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.787 16:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.787 [2024-11-20 16:58:37.285366] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
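As an aside, the JSON block that perform_tests dumped a few entries back is machine-readable; assuming its output were captured on its own (the test script itself does not post-process it), the headline numbers could be pulled out with jq:

  # Hypothetical post-processing of the perform_tests result; the field names
  # (.results[].job, .iops, .avg_latency_us) match the JSON shown in this log.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests \
    | jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"'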
00:13:18.787 [2024-11-20 16:58:37.285666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:13:19.046 [2024-11-20 16:58:37.428171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.046 [2024-11-20 16:58:37.473732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.046 [2024-11-20 16:58:37.528109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:19.983 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.983 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:19.983 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xg0ZBxVdI7 00:13:19.983 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:20.243 [2024-11-20 16:58:38.704744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.243 [2024-11-20 16:58:38.710118] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:20.243 [2024-11-20 16:58:38.710790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ebfb0 (107): Transport endpoint is not connected 00:13:20.243 [2024-11-20 16:58:38.711770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ebfb0 (9): Bad file descriptor 00:13:20.243 [2024-11-20 16:58:38.712780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:20.243 [2024-11-20 16:58:38.713071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:20.243 [2024-11-20 16:58:38.713153] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:20.243 [2024-11-20 16:58:38.713243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:20.243 request: 00:13:20.243 { 00:13:20.243 "name": "TLSTEST", 00:13:20.243 "trtype": "tcp", 00:13:20.243 "traddr": "10.0.0.3", 00:13:20.243 "adrfam": "ipv4", 00:13:20.243 "trsvcid": "4420", 00:13:20.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.243 "prchk_reftag": false, 00:13:20.243 "prchk_guard": false, 00:13:20.243 "hdgst": false, 00:13:20.243 "ddgst": false, 00:13:20.243 "psk": "key0", 00:13:20.243 "allow_unrecognized_csi": false, 00:13:20.243 "method": "bdev_nvme_attach_controller", 00:13:20.243 "req_id": 1 00:13:20.243 } 00:13:20.243 Got JSON-RPC error response 00:13:20.243 response: 00:13:20.243 { 00:13:20.243 "code": -5, 00:13:20.243 "message": "Input/output error" 00:13:20.243 } 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71631 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71631 ']' 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71631 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71631 00:13:20.243 killing process with pid 71631 00:13:20.243 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.243 00:13:20.243 Latency(us) 00:13:20.243 [2024-11-20T16:58:38.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.243 [2024-11-20T16:58:38.779Z] =================================================================================================================== 00:13:20.243 [2024-11-20T16:58:38.779Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71631' 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71631 00:13:20.243 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71631 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XBbl4goVw 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XBbl4goVw 
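The error dump above is the intended outcome of target/tls.sh@147: this bdevperf instance loaded key0 from /tmp/tmp.Xg0ZBxVdI7, a key the target never associated with host1, so the target cannot validate the handshake, the connection is dropped, the socket read fails with errno 107 (Transport endpoint is not connected), and bdev_nvme_attach_controller surfaces it as the -5 Input/output error shown. The expect-failure check amounts to:

  # Negative case: attach with a PSK the target does not recognize; the RPC must fail.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.Xg0ZBxVdI7
  if $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach succeeded with the wrong PSK" >&2
    exit 1
  fi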
00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3XBbl4goVw 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XBbl4goVw 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71658 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71658 /var/tmp/bdevperf.sock 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71658 ']' 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.503 16:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.503 [2024-11-20 16:58:39.026302] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:13:20.503 [2024-11-20 16:58:39.026589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71658 ] 00:13:20.763 [2024-11-20 16:58:39.174042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.763 [2024-11-20 16:58:39.229524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.763 [2024-11-20 16:58:39.282230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:21.699 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.699 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:21.699 16:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XBbl4goVw 00:13:21.958 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:22.218 [2024-11-20 16:58:40.501726] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:22.218 [2024-11-20 16:58:40.508179] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:22.218 [2024-11-20 16:58:40.508220] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:22.218 [2024-11-20 16:58:40.508270] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:22.218 [2024-11-20 16:58:40.508456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536fb0 (107): Transport endpoint is not connected 00:13:22.218 [2024-11-20 16:58:40.509447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536fb0 (9): Bad file descriptor 00:13:22.218 [2024-11-20 16:58:40.510444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:22.218 [2024-11-20 16:58:40.510623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:22.218 [2024-11-20 16:58:40.510664] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:22.218 [2024-11-20 16:58:40.510682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:22.218 request: 00:13:22.218 { 00:13:22.218 "name": "TLSTEST", 00:13:22.218 "trtype": "tcp", 00:13:22.218 "traddr": "10.0.0.3", 00:13:22.218 "adrfam": "ipv4", 00:13:22.218 "trsvcid": "4420", 00:13:22.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:22.218 "prchk_reftag": false, 00:13:22.218 "prchk_guard": false, 00:13:22.218 "hdgst": false, 00:13:22.218 "ddgst": false, 00:13:22.218 "psk": "key0", 00:13:22.218 "allow_unrecognized_csi": false, 00:13:22.218 "method": "bdev_nvme_attach_controller", 00:13:22.218 "req_id": 1 00:13:22.218 } 00:13:22.218 Got JSON-RPC error response 00:13:22.218 response: 00:13:22.218 { 00:13:22.218 "code": -5, 00:13:22.218 "message": "Input/output error" 00:13:22.218 } 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71658 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71658 ']' 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71658 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71658 00:13:22.218 killing process with pid 71658 00:13:22.218 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.218 00:13:22.218 Latency(us) 00:13:22.218 [2024-11-20T16:58:40.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.218 [2024-11-20T16:58:40.754Z] =================================================================================================================== 00:13:22.218 [2024-11-20T16:58:40.754Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71658' 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71658 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71658 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:22.218 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XBbl4goVw 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XBbl4goVw 
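The host2 case that just failed, and the cnode2 case being set up next, both trip on PSK-identity lookup rather than on key contents: the initiator presents the identity 'NVMe0R01 <hostnqn> <subnqn>' (visible in the tcp.c and posix.c errors above), and the target only has key0 registered for host1 against cnode1, so the lookup finds nothing and the connection is refused the same way. If host2 were actually meant to connect, the target side would additionally need something along these lines (not something this test intends to do):

  # Hedged sketch: make the host2 identity resolvable by registering it against the same key.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0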
00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3XBbl4goVw 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3XBbl4goVw 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71688 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71688 /var/tmp/bdevperf.sock 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71688 ']' 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.219 16:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.495 [2024-11-20 16:58:40.802637] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:13:22.495 [2024-11-20 16:58:40.803463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71688 ] 00:13:22.495 [2024-11-20 16:58:40.950653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.495 [2024-11-20 16:58:41.000213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.761 [2024-11-20 16:58:41.053605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.327 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.327 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:23.327 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3XBbl4goVw 00:13:23.586 16:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:23.844 [2024-11-20 16:58:42.228037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.844 [2024-11-20 16:58:42.238336] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:23.844 [2024-11-20 16:58:42.238377] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:23.844 [2024-11-20 16:58:42.238440] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:23.844 [2024-11-20 16:58:42.238790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226afb0 (107): Transport endpoint is not connected 00:13:23.844 [2024-11-20 16:58:42.239782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226afb0 (9): Bad file descriptor 00:13:23.844 [2024-11-20 16:58:42.240783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:23.844 [2024-11-20 16:58:42.240805] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:23.844 [2024-11-20 16:58:42.240826] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:23.844 [2024-11-20 16:58:42.240847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:23.844 request: 00:13:23.844 { 00:13:23.844 "name": "TLSTEST", 00:13:23.844 "trtype": "tcp", 00:13:23.844 "traddr": "10.0.0.3", 00:13:23.844 "adrfam": "ipv4", 00:13:23.844 "trsvcid": "4420", 00:13:23.844 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:23.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.844 "prchk_reftag": false, 00:13:23.844 "prchk_guard": false, 00:13:23.844 "hdgst": false, 00:13:23.844 "ddgst": false, 00:13:23.844 "psk": "key0", 00:13:23.844 "allow_unrecognized_csi": false, 00:13:23.844 "method": "bdev_nvme_attach_controller", 00:13:23.844 "req_id": 1 00:13:23.844 } 00:13:23.844 Got JSON-RPC error response 00:13:23.844 response: 00:13:23.844 { 00:13:23.844 "code": -5, 00:13:23.844 "message": "Input/output error" 00:13:23.844 } 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71688 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71688 ']' 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71688 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71688 00:13:23.844 killing process with pid 71688 00:13:23.844 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.844 00:13:23.844 Latency(us) 00:13:23.844 [2024-11-20T16:58:42.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.844 [2024-11-20T16:58:42.380Z] =================================================================================================================== 00:13:23.844 [2024-11-20T16:58:42.380Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71688' 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71688 00:13:23.844 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71688 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:24.102 16:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71722 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71722 /var/tmp/bdevperf.sock 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71722 ']' 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.102 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.102 [2024-11-20 16:58:42.522992] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
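The last traced case (target/tls.sh@156) passes an empty string in place of a key path; as the output below shows, keyring_file_add_key rejects it up front ('Non-absolute paths are not allowed', JSON-RPC code -1) and the later attach then fails with -126 'Required key not available' because key0 was never added. Reduced to its essentials, the check is:

  # Negative case: an empty key path never reaches the TLS handshake at all.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  ! $rpc -s "$sock" keyring_file_add_key key0 ''           # expect: Operation not permitted (-1)
  ! $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # expect: Required key not available (-126)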
00:13:24.102 [2024-11-20 16:58:42.523814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71722 ] 00:13:24.361 [2024-11-20 16:58:42.672622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.361 [2024-11-20 16:58:42.728564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.361 [2024-11-20 16:58:42.784181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.361 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.361 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:24.361 16:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:24.619 [2024-11-20 16:58:43.105937] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:24.619 [2024-11-20 16:58:43.105977] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:24.619 request: 00:13:24.619 { 00:13:24.619 "name": "key0", 00:13:24.619 "path": "", 00:13:24.619 "method": "keyring_file_add_key", 00:13:24.619 "req_id": 1 00:13:24.619 } 00:13:24.619 Got JSON-RPC error response 00:13:24.619 response: 00:13:24.619 { 00:13:24.619 "code": -1, 00:13:24.619 "message": "Operation not permitted" 00:13:24.619 } 00:13:24.619 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:24.878 [2024-11-20 16:58:43.330082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:24.878 [2024-11-20 16:58:43.330147] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:24.878 request: 00:13:24.878 { 00:13:24.878 "name": "TLSTEST", 00:13:24.878 "trtype": "tcp", 00:13:24.878 "traddr": "10.0.0.3", 00:13:24.878 "adrfam": "ipv4", 00:13:24.878 "trsvcid": "4420", 00:13:24.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.878 "prchk_reftag": false, 00:13:24.878 "prchk_guard": false, 00:13:24.878 "hdgst": false, 00:13:24.878 "ddgst": false, 00:13:24.878 "psk": "key0", 00:13:24.878 "allow_unrecognized_csi": false, 00:13:24.878 "method": "bdev_nvme_attach_controller", 00:13:24.878 "req_id": 1 00:13:24.878 } 00:13:24.878 Got JSON-RPC error response 00:13:24.878 response: 00:13:24.878 { 00:13:24.878 "code": -126, 00:13:24.878 "message": "Required key not available" 00:13:24.878 } 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71722 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71722 ']' 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71722 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.878 16:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71722 00:13:24.878 killing process with pid 71722 00:13:24.878 Received shutdown signal, test time was about 10.000000 seconds 00:13:24.878 00:13:24.878 Latency(us) 00:13:24.878 [2024-11-20T16:58:43.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.878 [2024-11-20T16:58:43.414Z] =================================================================================================================== 00:13:24.878 [2024-11-20T16:58:43.414Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71722' 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71722 00:13:24.878 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71722 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71257 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71257 ']' 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71257 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71257 00:13:25.138 killing process with pid 71257 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71257' 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71257 00:13:25.138 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71257 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.YwUiPy5WQg 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.YwUiPy5WQg 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71753 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71753 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71753 ']' 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.397 16:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.397 [2024-11-20 16:58:43.908389] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:25.397 [2024-11-20 16:58:43.909204] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.657 [2024-11-20 16:58:44.056878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.657 [2024-11-20 16:58:44.099092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.657 [2024-11-20 16:58:44.099408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
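The key_long value generated above comes from the format_interchange_psk/format_key helpers in nvmf/common.sh: the raw key is wrapped in the NVMe TLS PSK interchange format, i.e. the NVMeTLSkey-1 prefix, a two-digit hash identifier (2, i.e. the SHA-384 variant of the format), and a base64 blob of the key bytes with a CRC32 appended, which is then written to a mktemp file and locked down with chmod 0600. A rough sketch of that encoding follows; the little-endian CRC byte order and the zero-padded digest field are assumptions, not details read out of the helper:

  # sketch of the interchange encoding behind key_long above
  key=00112233445566778899aabbccddeeff0011223344556677
  python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + crc).decode())' "$key"
  # if the assumptions hold, this prints the NVMeTLSkey-1:02:... string captured as key_long above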
00:13:25.657 [2024-11-20 16:58:44.099569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.657 [2024-11-20 16:58:44.099622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.657 [2024-11-20 16:58:44.099718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.657 [2024-11-20 16:58:44.100179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.657 [2024-11-20 16:58:44.153032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YwUiPy5WQg 00:13:26.592 16:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:26.850 [2024-11-20 16:58:45.189589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.850 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:27.109 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:27.368 [2024-11-20 16:58:45.689918] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:27.368 [2024-11-20 16:58:45.690177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:27.368 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:27.626 malloc0 00:13:27.626 16:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:27.885 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:28.143 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YwUiPy5WQg 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YwUiPy5WQg 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71809 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71809 /var/tmp/bdevperf.sock 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71809 ']' 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.403 16:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.403 [2024-11-20 16:58:46.739445] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
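Before this bdevperf instance was launched, the target side was configured through the setup_nvmf_tgt helper (target/tls.sh@50-59 in the trace above). Condensed into the underlying rpc.py calls, in the order the harness issues them, the TLS-enabled target setup visible in this log looks like the following (addresses, NQNs, and the key path are copied from the log; the ip netns prefix the harness uses is omitted for brevity):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k opens a TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg                                          # key file was chmod 0600 above
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0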
00:13:28.403 [2024-11-20 16:58:46.740175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71809 ] 00:13:28.403 [2024-11-20 16:58:46.881927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.403 [2024-11-20 16:58:46.933018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.662 [2024-11-20 16:58:46.985499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.230 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.230 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:29.230 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:29.489 16:58:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:29.748 [2024-11-20 16:58:48.152185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.748 TLSTESTn1 00:13:29.748 16:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:30.007 Running I/O for 10 seconds... 00:13:31.875 4399.00 IOPS, 17.18 MiB/s [2024-11-20T16:58:51.786Z] 4479.00 IOPS, 17.50 MiB/s [2024-11-20T16:58:52.724Z] 4506.67 IOPS, 17.60 MiB/s [2024-11-20T16:58:53.660Z] 4530.00 IOPS, 17.70 MiB/s [2024-11-20T16:58:54.595Z] 4536.60 IOPS, 17.72 MiB/s [2024-11-20T16:58:55.531Z] 4537.50 IOPS, 17.72 MiB/s [2024-11-20T16:58:56.466Z] 4535.86 IOPS, 17.72 MiB/s [2024-11-20T16:58:57.400Z] 4537.88 IOPS, 17.73 MiB/s [2024-11-20T16:58:58.790Z] 4536.56 IOPS, 17.72 MiB/s [2024-11-20T16:58:58.790Z] 4526.30 IOPS, 17.68 MiB/s 00:13:40.254 Latency(us) 00:13:40.254 [2024-11-20T16:58:58.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.254 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:40.254 Verification LBA range: start 0x0 length 0x2000 00:13:40.254 TLSTESTn1 : 10.01 4532.29 17.70 0.00 0.00 28193.00 5570.56 22043.93 00:13:40.254 [2024-11-20T16:58:58.790Z] =================================================================================================================== 00:13:40.254 [2024-11-20T16:58:58.790Z] Total : 4532.29 17.70 0.00 0.00 28193.00 5570.56 22043.93 00:13:40.254 { 00:13:40.254 "results": [ 00:13:40.254 { 00:13:40.254 "job": "TLSTESTn1", 00:13:40.254 "core_mask": "0x4", 00:13:40.254 "workload": "verify", 00:13:40.254 "status": "finished", 00:13:40.254 "verify_range": { 00:13:40.254 "start": 0, 00:13:40.254 "length": 8192 00:13:40.254 }, 00:13:40.254 "queue_depth": 128, 00:13:40.254 "io_size": 4096, 00:13:40.254 "runtime": 10.014799, 00:13:40.255 "iops": 4532.292660092329, 00:13:40.255 "mibps": 17.70426820348566, 00:13:40.255 "io_failed": 0, 00:13:40.255 "io_timeout": 0, 00:13:40.255 "avg_latency_us": 28193.003034549063, 00:13:40.255 "min_latency_us": 5570.56, 00:13:40.255 "max_latency_us": 
22043.927272727273 00:13:40.255 } 00:13:40.255 ], 00:13:40.255 "core_count": 1 00:13:40.255 } 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71809 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71809 ']' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71809 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71809 00:13:40.255 killing process with pid 71809 00:13:40.255 Received shutdown signal, test time was about 10.000000 seconds 00:13:40.255 00:13:40.255 Latency(us) 00:13:40.255 [2024-11-20T16:58:58.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.255 [2024-11-20T16:58:58.791Z] =================================================================================================================== 00:13:40.255 [2024-11-20T16:58:58.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71809' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71809 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71809 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.YwUiPy5WQg 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YwUiPy5WQg 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YwUiPy5WQg 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YwUiPy5WQg 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YwUiPy5WQg 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71951 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71951 /var/tmp/bdevperf.sock 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71951 ']' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.255 16:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.255 [2024-11-20 16:58:58.706902] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
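This third bdevperf instance is the negative half of the key-permission check: the key file was just relaxed to 0666 and the run_bdevperf invocation wrapped in NOT, so the harness expects it to fail. The keyring only accepts PSK files without group or other permission bits, which is why the key was originally written with chmod 0600. A small sketch of the check this run encodes, against the same key and bdevperf RPC socket:

  key=/tmp/tmp.YwUiPy5WQg
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  chmod 0666 "$key"
  if $rpc keyring_file_add_key key0 "$key"; then
      echo "unexpected success: keyring accepted a group/world-readable PSK file" >&2
      exit 1
  fi
  # expected rejection, as captured below: "Invalid permissions for key file '/tmp/tmp.YwUiPy5WQg': 0100666"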
00:13:40.255 [2024-11-20 16:58:58.707014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71951 ] 00:13:40.521 [2024-11-20 16:58:58.854933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.521 [2024-11-20 16:58:58.920513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.521 [2024-11-20 16:58:58.975962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.458 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.458 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:41.458 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:41.458 [2024-11-20 16:58:59.957811] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YwUiPy5WQg': 0100666 00:13:41.458 [2024-11-20 16:58:59.957877] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:41.458 request: 00:13:41.458 { 00:13:41.458 "name": "key0", 00:13:41.458 "path": "/tmp/tmp.YwUiPy5WQg", 00:13:41.458 "method": "keyring_file_add_key", 00:13:41.458 "req_id": 1 00:13:41.458 } 00:13:41.458 Got JSON-RPC error response 00:13:41.458 response: 00:13:41.458 { 00:13:41.458 "code": -1, 00:13:41.458 "message": "Operation not permitted" 00:13:41.458 } 00:13:41.458 16:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:41.718 [2024-11-20 16:59:00.169939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.718 [2024-11-20 16:59:00.170200] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:41.718 request: 00:13:41.718 { 00:13:41.718 "name": "TLSTEST", 00:13:41.718 "trtype": "tcp", 00:13:41.718 "traddr": "10.0.0.3", 00:13:41.718 "adrfam": "ipv4", 00:13:41.718 "trsvcid": "4420", 00:13:41.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.718 "prchk_reftag": false, 00:13:41.718 "prchk_guard": false, 00:13:41.718 "hdgst": false, 00:13:41.718 "ddgst": false, 00:13:41.718 "psk": "key0", 00:13:41.718 "allow_unrecognized_csi": false, 00:13:41.718 "method": "bdev_nvme_attach_controller", 00:13:41.718 "req_id": 1 00:13:41.718 } 00:13:41.718 Got JSON-RPC error response 00:13:41.718 response: 00:13:41.718 { 00:13:41.718 "code": -126, 00:13:41.718 "message": "Required key not available" 00:13:41.718 } 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71951 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71951 ']' 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71951 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71951 00:13:41.718 killing process with pid 71951 00:13:41.718 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.718 00:13:41.718 Latency(us) 00:13:41.718 [2024-11-20T16:59:00.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.718 [2024-11-20T16:59:00.254Z] =================================================================================================================== 00:13:41.718 [2024-11-20T16:59:00.254Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71951' 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71951 00:13:41.718 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71951 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71753 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71753 ']' 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71753 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71753 00:13:41.978 killing process with pid 71753 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71753' 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71753 00:13:41.978 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71753 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71990 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71990 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71990 ']' 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.238 16:59:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.497 [2024-11-20 16:59:00.780520] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:42.497 [2024-11-20 16:59:00.781060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.497 [2024-11-20 16:59:00.930628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.497 [2024-11-20 16:59:00.996692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.497 [2024-11-20 16:59:00.997116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.497 [2024-11-20 16:59:00.997255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.498 [2024-11-20 16:59:00.997406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.498 [2024-11-20 16:59:00.997418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
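The nvmf target started here (pid 71990) is used for the target-side variant of the same permission check: setup_nvmf_tgt is wrapped in NOT because, with the key file still at 0666, keyring_file_add_key fails, and the later nvmf_subsystem_add_host --psk key0 then fails with "Key 'key0' does not exist" / Internal error, as seen further down. A sketch of that two-step failure chain as plain rpc.py calls (assuming the subsystem and listener from the earlier setup are already in place):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg \
      || echo "keyring add rejected: key file is 0666"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 \
      || echo "add_host fails: key 'key0' was never registered"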
00:13:42.498 [2024-11-20 16:59:00.997869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.757 [2024-11-20 16:59:01.070524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YwUiPy5WQg 00:13:43.325 16:59:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:43.584 [2024-11-20 16:59:02.106102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.843 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:44.102 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:44.102 [2024-11-20 16:59:02.590108] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:44.102 [2024-11-20 16:59:02.590321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:44.102 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:44.361 malloc0 00:13:44.361 16:59:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:44.634 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:44.893 
[2024-11-20 16:59:03.309453] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YwUiPy5WQg': 0100666 00:13:44.893 [2024-11-20 16:59:03.309492] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:44.893 request: 00:13:44.893 { 00:13:44.893 "name": "key0", 00:13:44.893 "path": "/tmp/tmp.YwUiPy5WQg", 00:13:44.893 "method": "keyring_file_add_key", 00:13:44.893 "req_id": 1 00:13:44.893 } 00:13:44.893 Got JSON-RPC error response 00:13:44.893 response: 00:13:44.893 { 00:13:44.893 "code": -1, 00:13:44.893 "message": "Operation not permitted" 00:13:44.893 } 00:13:44.893 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:45.152 [2024-11-20 16:59:03.533520] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:45.152 [2024-11-20 16:59:03.533589] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:45.152 request: 00:13:45.152 { 00:13:45.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.152 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.152 "psk": "key0", 00:13:45.152 "method": "nvmf_subsystem_add_host", 00:13:45.152 "req_id": 1 00:13:45.152 } 00:13:45.152 Got JSON-RPC error response 00:13:45.152 response: 00:13:45.152 { 00:13:45.152 "code": -32603, 00:13:45.152 "message": "Internal error" 00:13:45.152 } 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71990 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71990 ']' 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71990 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71990 00:13:45.152 killing process with pid 71990 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71990' 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71990 00:13:45.152 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71990 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.YwUiPy5WQg 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72054 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72054 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72054 ']' 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.411 16:59:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.411 [2024-11-20 16:59:03.854456] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:45.411 [2024-11-20 16:59:03.854769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.669 [2024-11-20 16:59:04.001515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.669 [2024-11-20 16:59:04.049207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.669 [2024-11-20 16:59:04.049272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.669 [2024-11-20 16:59:04.049298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.669 [2024-11-20 16:59:04.049305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.669 [2024-11-20 16:59:04.049312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.669 [2024-11-20 16:59:04.049698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.669 [2024-11-20 16:59:04.102757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.669 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.669 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.669 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.669 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.669 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.928 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.928 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:13:45.928 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YwUiPy5WQg 00:13:45.928 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.187 [2024-11-20 16:59:04.485166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.187 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.447 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:46.447 [2024-11-20 16:59:04.973286] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.447 [2024-11-20 16:59:04.973503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:46.706 16:59:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.965 malloc0 00:13:46.965 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:46.965 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:47.532 16:59:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:47.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
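The run that begins here ends in the configuration round-trip of this test group: after attaching TLSTESTn1, tls.sh captures both sides' configuration with save_config, producing the tgtconf JSON (target, default /var/tmp/spdk.sock) and the bdevperfconf JSON (bdevperf, via -s /var/tmp/bdevperf.sock) that follow. A sketch of capturing the two snapshots outside the harness; the jq filter is an illustrative addition, not something this log runs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc save_config > /tmp/tgtconf.json                                    # nvmf target side
  $rpc -s /var/tmp/bdevperf.sock save_config > /tmp/bdevperfconf.json     # bdevperf side

  # both snapshots should carry the same PSK under their "keyring" subsystem config
  jq '.subsystems[] | select(.subsystem == "keyring")' /tmp/tgtconf.json /tmp/bdevperfconf.json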
00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72103 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72103 /var/tmp/bdevperf.sock 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72103 ']' 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.790 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.791 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.791 16:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.791 [2024-11-20 16:59:06.121718] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:47.791 [2024-11-20 16:59:06.122457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72103 ] 00:13:47.791 [2024-11-20 16:59:06.260860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.791 [2024-11-20 16:59:06.311824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.049 [2024-11-20 16:59:06.366411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.620 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.620 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:48.620 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:13:48.880 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.139 [2024-11-20 16:59:07.554687] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.139 TLSTESTn1 00:13:49.139 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:49.707 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:49.707 "subsystems": [ 00:13:49.707 { 00:13:49.707 "subsystem": "keyring", 00:13:49.707 "config": [ 00:13:49.707 { 00:13:49.707 "method": "keyring_file_add_key", 00:13:49.707 "params": { 00:13:49.707 "name": "key0", 00:13:49.707 "path": "/tmp/tmp.YwUiPy5WQg" 00:13:49.707 } 00:13:49.707 } 00:13:49.707 ] 00:13:49.707 }, 
00:13:49.707 { 00:13:49.707 "subsystem": "iobuf", 00:13:49.707 "config": [ 00:13:49.707 { 00:13:49.707 "method": "iobuf_set_options", 00:13:49.707 "params": { 00:13:49.707 "small_pool_count": 8192, 00:13:49.707 "large_pool_count": 1024, 00:13:49.707 "small_bufsize": 8192, 00:13:49.707 "large_bufsize": 135168, 00:13:49.707 "enable_numa": false 00:13:49.707 } 00:13:49.707 } 00:13:49.707 ] 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "subsystem": "sock", 00:13:49.707 "config": [ 00:13:49.707 { 00:13:49.707 "method": "sock_set_default_impl", 00:13:49.707 "params": { 00:13:49.707 "impl_name": "uring" 00:13:49.707 } 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "method": "sock_impl_set_options", 00:13:49.707 "params": { 00:13:49.707 "impl_name": "ssl", 00:13:49.707 "recv_buf_size": 4096, 00:13:49.707 "send_buf_size": 4096, 00:13:49.707 "enable_recv_pipe": true, 00:13:49.707 "enable_quickack": false, 00:13:49.707 "enable_placement_id": 0, 00:13:49.707 "enable_zerocopy_send_server": true, 00:13:49.707 "enable_zerocopy_send_client": false, 00:13:49.707 "zerocopy_threshold": 0, 00:13:49.707 "tls_version": 0, 00:13:49.707 "enable_ktls": false 00:13:49.707 } 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "method": "sock_impl_set_options", 00:13:49.707 "params": { 00:13:49.707 "impl_name": "posix", 00:13:49.707 "recv_buf_size": 2097152, 00:13:49.707 "send_buf_size": 2097152, 00:13:49.707 "enable_recv_pipe": true, 00:13:49.707 "enable_quickack": false, 00:13:49.707 "enable_placement_id": 0, 00:13:49.707 "enable_zerocopy_send_server": true, 00:13:49.707 "enable_zerocopy_send_client": false, 00:13:49.707 "zerocopy_threshold": 0, 00:13:49.707 "tls_version": 0, 00:13:49.707 "enable_ktls": false 00:13:49.707 } 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "method": "sock_impl_set_options", 00:13:49.707 "params": { 00:13:49.707 "impl_name": "uring", 00:13:49.707 "recv_buf_size": 2097152, 00:13:49.707 "send_buf_size": 2097152, 00:13:49.707 "enable_recv_pipe": true, 00:13:49.707 "enable_quickack": false, 00:13:49.707 "enable_placement_id": 0, 00:13:49.707 "enable_zerocopy_send_server": false, 00:13:49.707 "enable_zerocopy_send_client": false, 00:13:49.707 "zerocopy_threshold": 0, 00:13:49.707 "tls_version": 0, 00:13:49.707 "enable_ktls": false 00:13:49.707 } 00:13:49.707 } 00:13:49.707 ] 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "subsystem": "vmd", 00:13:49.707 "config": [] 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "subsystem": "accel", 00:13:49.707 "config": [ 00:13:49.707 { 00:13:49.707 "method": "accel_set_options", 00:13:49.707 "params": { 00:13:49.707 "small_cache_size": 128, 00:13:49.707 "large_cache_size": 16, 00:13:49.707 "task_count": 2048, 00:13:49.707 "sequence_count": 2048, 00:13:49.707 "buf_count": 2048 00:13:49.707 } 00:13:49.707 } 00:13:49.707 ] 00:13:49.707 }, 00:13:49.707 { 00:13:49.707 "subsystem": "bdev", 00:13:49.707 "config": [ 00:13:49.707 { 00:13:49.707 "method": "bdev_set_options", 00:13:49.707 "params": { 00:13:49.707 "bdev_io_pool_size": 65535, 00:13:49.707 "bdev_io_cache_size": 256, 00:13:49.707 "bdev_auto_examine": true, 00:13:49.707 "iobuf_small_cache_size": 128, 00:13:49.708 "iobuf_large_cache_size": 16 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "bdev_raid_set_options", 00:13:49.708 "params": { 00:13:49.708 "process_window_size_kb": 1024, 00:13:49.708 "process_max_bandwidth_mb_sec": 0 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "bdev_iscsi_set_options", 00:13:49.708 "params": { 00:13:49.708 "timeout_sec": 30 00:13:49.708 } 00:13:49.708 
}, 00:13:49.708 { 00:13:49.708 "method": "bdev_nvme_set_options", 00:13:49.708 "params": { 00:13:49.708 "action_on_timeout": "none", 00:13:49.708 "timeout_us": 0, 00:13:49.708 "timeout_admin_us": 0, 00:13:49.708 "keep_alive_timeout_ms": 10000, 00:13:49.708 "arbitration_burst": 0, 00:13:49.708 "low_priority_weight": 0, 00:13:49.708 "medium_priority_weight": 0, 00:13:49.708 "high_priority_weight": 0, 00:13:49.708 "nvme_adminq_poll_period_us": 10000, 00:13:49.708 "nvme_ioq_poll_period_us": 0, 00:13:49.708 "io_queue_requests": 0, 00:13:49.708 "delay_cmd_submit": true, 00:13:49.708 "transport_retry_count": 4, 00:13:49.708 "bdev_retry_count": 3, 00:13:49.708 "transport_ack_timeout": 0, 00:13:49.708 "ctrlr_loss_timeout_sec": 0, 00:13:49.708 "reconnect_delay_sec": 0, 00:13:49.708 "fast_io_fail_timeout_sec": 0, 00:13:49.708 "disable_auto_failback": false, 00:13:49.708 "generate_uuids": false, 00:13:49.708 "transport_tos": 0, 00:13:49.708 "nvme_error_stat": false, 00:13:49.708 "rdma_srq_size": 0, 00:13:49.708 "io_path_stat": false, 00:13:49.708 "allow_accel_sequence": false, 00:13:49.708 "rdma_max_cq_size": 0, 00:13:49.708 "rdma_cm_event_timeout_ms": 0, 00:13:49.708 "dhchap_digests": [ 00:13:49.708 "sha256", 00:13:49.708 "sha384", 00:13:49.708 "sha512" 00:13:49.708 ], 00:13:49.708 "dhchap_dhgroups": [ 00:13:49.708 "null", 00:13:49.708 "ffdhe2048", 00:13:49.708 "ffdhe3072", 00:13:49.708 "ffdhe4096", 00:13:49.708 "ffdhe6144", 00:13:49.708 "ffdhe8192" 00:13:49.708 ] 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "bdev_nvme_set_hotplug", 00:13:49.708 "params": { 00:13:49.708 "period_us": 100000, 00:13:49.708 "enable": false 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "bdev_malloc_create", 00:13:49.708 "params": { 00:13:49.708 "name": "malloc0", 00:13:49.708 "num_blocks": 8192, 00:13:49.708 "block_size": 4096, 00:13:49.708 "physical_block_size": 4096, 00:13:49.708 "uuid": "b6d7cea5-f9ca-40d4-af1f-74d6d529a10a", 00:13:49.708 "optimal_io_boundary": 0, 00:13:49.708 "md_size": 0, 00:13:49.708 "dif_type": 0, 00:13:49.708 "dif_is_head_of_md": false, 00:13:49.708 "dif_pi_format": 0 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "bdev_wait_for_examine" 00:13:49.708 } 00:13:49.708 ] 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "subsystem": "nbd", 00:13:49.708 "config": [] 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "subsystem": "scheduler", 00:13:49.708 "config": [ 00:13:49.708 { 00:13:49.708 "method": "framework_set_scheduler", 00:13:49.708 "params": { 00:13:49.708 "name": "static" 00:13:49.708 } 00:13:49.708 } 00:13:49.708 ] 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "subsystem": "nvmf", 00:13:49.708 "config": [ 00:13:49.708 { 00:13:49.708 "method": "nvmf_set_config", 00:13:49.708 "params": { 00:13:49.708 "discovery_filter": "match_any", 00:13:49.708 "admin_cmd_passthru": { 00:13:49.708 "identify_ctrlr": false 00:13:49.708 }, 00:13:49.708 "dhchap_digests": [ 00:13:49.708 "sha256", 00:13:49.708 "sha384", 00:13:49.708 "sha512" 00:13:49.708 ], 00:13:49.708 "dhchap_dhgroups": [ 00:13:49.708 "null", 00:13:49.708 "ffdhe2048", 00:13:49.708 "ffdhe3072", 00:13:49.708 "ffdhe4096", 00:13:49.708 "ffdhe6144", 00:13:49.708 "ffdhe8192" 00:13:49.708 ] 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_set_max_subsystems", 00:13:49.708 "params": { 00:13:49.708 "max_subsystems": 1024 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_set_crdt", 00:13:49.708 "params": { 00:13:49.708 "crdt1": 0, 00:13:49.708 
"crdt2": 0, 00:13:49.708 "crdt3": 0 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_create_transport", 00:13:49.708 "params": { 00:13:49.708 "trtype": "TCP", 00:13:49.708 "max_queue_depth": 128, 00:13:49.708 "max_io_qpairs_per_ctrlr": 127, 00:13:49.708 "in_capsule_data_size": 4096, 00:13:49.708 "max_io_size": 131072, 00:13:49.708 "io_unit_size": 131072, 00:13:49.708 "max_aq_depth": 128, 00:13:49.708 "num_shared_buffers": 511, 00:13:49.708 "buf_cache_size": 4294967295, 00:13:49.708 "dif_insert_or_strip": false, 00:13:49.708 "zcopy": false, 00:13:49.708 "c2h_success": false, 00:13:49.708 "sock_priority": 0, 00:13:49.708 "abort_timeout_sec": 1, 00:13:49.708 "ack_timeout": 0, 00:13:49.708 "data_wr_pool_size": 0 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_create_subsystem", 00:13:49.708 "params": { 00:13:49.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.708 "allow_any_host": false, 00:13:49.708 "serial_number": "SPDK00000000000001", 00:13:49.708 "model_number": "SPDK bdev Controller", 00:13:49.708 "max_namespaces": 10, 00:13:49.708 "min_cntlid": 1, 00:13:49.708 "max_cntlid": 65519, 00:13:49.708 "ana_reporting": false 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_subsystem_add_host", 00:13:49.708 "params": { 00:13:49.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.708 "host": "nqn.2016-06.io.spdk:host1", 00:13:49.708 "psk": "key0" 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_subsystem_add_ns", 00:13:49.708 "params": { 00:13:49.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.708 "namespace": { 00:13:49.708 "nsid": 1, 00:13:49.708 "bdev_name": "malloc0", 00:13:49.708 "nguid": "B6D7CEA5F9CA40D4AF1F74D6D529A10A", 00:13:49.708 "uuid": "b6d7cea5-f9ca-40d4-af1f-74d6d529a10a", 00:13:49.708 "no_auto_visible": false 00:13:49.708 } 00:13:49.708 } 00:13:49.708 }, 00:13:49.708 { 00:13:49.708 "method": "nvmf_subsystem_add_listener", 00:13:49.708 "params": { 00:13:49.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.709 "listen_address": { 00:13:49.709 "trtype": "TCP", 00:13:49.709 "adrfam": "IPv4", 00:13:49.709 "traddr": "10.0.0.3", 00:13:49.709 "trsvcid": "4420" 00:13:49.709 }, 00:13:49.709 "secure_channel": true 00:13:49.709 } 00:13:49.709 } 00:13:49.709 ] 00:13:49.709 } 00:13:49.709 ] 00:13:49.709 }' 00:13:49.709 16:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:49.968 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:49.968 "subsystems": [ 00:13:49.968 { 00:13:49.968 "subsystem": "keyring", 00:13:49.968 "config": [ 00:13:49.968 { 00:13:49.968 "method": "keyring_file_add_key", 00:13:49.968 "params": { 00:13:49.968 "name": "key0", 00:13:49.968 "path": "/tmp/tmp.YwUiPy5WQg" 00:13:49.968 } 00:13:49.968 } 00:13:49.968 ] 00:13:49.968 }, 00:13:49.968 { 00:13:49.968 "subsystem": "iobuf", 00:13:49.968 "config": [ 00:13:49.968 { 00:13:49.968 "method": "iobuf_set_options", 00:13:49.968 "params": { 00:13:49.968 "small_pool_count": 8192, 00:13:49.968 "large_pool_count": 1024, 00:13:49.968 "small_bufsize": 8192, 00:13:49.968 "large_bufsize": 135168, 00:13:49.968 "enable_numa": false 00:13:49.968 } 00:13:49.968 } 00:13:49.968 ] 00:13:49.968 }, 00:13:49.968 { 00:13:49.968 "subsystem": "sock", 00:13:49.968 "config": [ 00:13:49.968 { 00:13:49.968 "method": "sock_set_default_impl", 00:13:49.968 "params": { 00:13:49.968 "impl_name": "uring" 00:13:49.968 
} 00:13:49.968 }, 00:13:49.968 { 00:13:49.968 "method": "sock_impl_set_options", 00:13:49.968 "params": { 00:13:49.968 "impl_name": "ssl", 00:13:49.968 "recv_buf_size": 4096, 00:13:49.968 "send_buf_size": 4096, 00:13:49.968 "enable_recv_pipe": true, 00:13:49.968 "enable_quickack": false, 00:13:49.968 "enable_placement_id": 0, 00:13:49.968 "enable_zerocopy_send_server": true, 00:13:49.968 "enable_zerocopy_send_client": false, 00:13:49.968 "zerocopy_threshold": 0, 00:13:49.968 "tls_version": 0, 00:13:49.968 "enable_ktls": false 00:13:49.968 } 00:13:49.968 }, 00:13:49.968 { 00:13:49.968 "method": "sock_impl_set_options", 00:13:49.968 "params": { 00:13:49.968 "impl_name": "posix", 00:13:49.968 "recv_buf_size": 2097152, 00:13:49.968 "send_buf_size": 2097152, 00:13:49.968 "enable_recv_pipe": true, 00:13:49.968 "enable_quickack": false, 00:13:49.968 "enable_placement_id": 0, 00:13:49.968 "enable_zerocopy_send_server": true, 00:13:49.968 "enable_zerocopy_send_client": false, 00:13:49.968 "zerocopy_threshold": 0, 00:13:49.968 "tls_version": 0, 00:13:49.968 "enable_ktls": false 00:13:49.968 } 00:13:49.968 }, 00:13:49.968 { 00:13:49.969 "method": "sock_impl_set_options", 00:13:49.969 "params": { 00:13:49.969 "impl_name": "uring", 00:13:49.969 "recv_buf_size": 2097152, 00:13:49.969 "send_buf_size": 2097152, 00:13:49.969 "enable_recv_pipe": true, 00:13:49.969 "enable_quickack": false, 00:13:49.969 "enable_placement_id": 0, 00:13:49.969 "enable_zerocopy_send_server": false, 00:13:49.969 "enable_zerocopy_send_client": false, 00:13:49.969 "zerocopy_threshold": 0, 00:13:49.969 "tls_version": 0, 00:13:49.969 "enable_ktls": false 00:13:49.969 } 00:13:49.969 } 00:13:49.969 ] 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "subsystem": "vmd", 00:13:49.969 "config": [] 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "subsystem": "accel", 00:13:49.969 "config": [ 00:13:49.969 { 00:13:49.969 "method": "accel_set_options", 00:13:49.969 "params": { 00:13:49.969 "small_cache_size": 128, 00:13:49.969 "large_cache_size": 16, 00:13:49.969 "task_count": 2048, 00:13:49.969 "sequence_count": 2048, 00:13:49.969 "buf_count": 2048 00:13:49.969 } 00:13:49.969 } 00:13:49.969 ] 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "subsystem": "bdev", 00:13:49.969 "config": [ 00:13:49.969 { 00:13:49.969 "method": "bdev_set_options", 00:13:49.969 "params": { 00:13:49.969 "bdev_io_pool_size": 65535, 00:13:49.969 "bdev_io_cache_size": 256, 00:13:49.969 "bdev_auto_examine": true, 00:13:49.969 "iobuf_small_cache_size": 128, 00:13:49.969 "iobuf_large_cache_size": 16 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_raid_set_options", 00:13:49.969 "params": { 00:13:49.969 "process_window_size_kb": 1024, 00:13:49.969 "process_max_bandwidth_mb_sec": 0 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_iscsi_set_options", 00:13:49.969 "params": { 00:13:49.969 "timeout_sec": 30 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_nvme_set_options", 00:13:49.969 "params": { 00:13:49.969 "action_on_timeout": "none", 00:13:49.969 "timeout_us": 0, 00:13:49.969 "timeout_admin_us": 0, 00:13:49.969 "keep_alive_timeout_ms": 10000, 00:13:49.969 "arbitration_burst": 0, 00:13:49.969 "low_priority_weight": 0, 00:13:49.969 "medium_priority_weight": 0, 00:13:49.969 "high_priority_weight": 0, 00:13:49.969 "nvme_adminq_poll_period_us": 10000, 00:13:49.969 "nvme_ioq_poll_period_us": 0, 00:13:49.969 "io_queue_requests": 512, 00:13:49.969 "delay_cmd_submit": true, 00:13:49.969 "transport_retry_count": 4, 
00:13:49.969 "bdev_retry_count": 3, 00:13:49.969 "transport_ack_timeout": 0, 00:13:49.969 "ctrlr_loss_timeout_sec": 0, 00:13:49.969 "reconnect_delay_sec": 0, 00:13:49.969 "fast_io_fail_timeout_sec": 0, 00:13:49.969 "disable_auto_failback": false, 00:13:49.969 "generate_uuids": false, 00:13:49.969 "transport_tos": 0, 00:13:49.969 "nvme_error_stat": false, 00:13:49.969 "rdma_srq_size": 0, 00:13:49.969 "io_path_stat": false, 00:13:49.969 "allow_accel_sequence": false, 00:13:49.969 "rdma_max_cq_size": 0, 00:13:49.969 "rdma_cm_event_timeout_ms": 0, 00:13:49.969 "dhchap_digests": [ 00:13:49.969 "sha256", 00:13:49.969 "sha384", 00:13:49.969 "sha512" 00:13:49.969 ], 00:13:49.969 "dhchap_dhgroups": [ 00:13:49.969 "null", 00:13:49.969 "ffdhe2048", 00:13:49.969 "ffdhe3072", 00:13:49.969 "ffdhe4096", 00:13:49.969 "ffdhe6144", 00:13:49.969 "ffdhe8192" 00:13:49.969 ] 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_nvme_attach_controller", 00:13:49.969 "params": { 00:13:49.969 "name": "TLSTEST", 00:13:49.969 "trtype": "TCP", 00:13:49.969 "adrfam": "IPv4", 00:13:49.969 "traddr": "10.0.0.3", 00:13:49.969 "trsvcid": "4420", 00:13:49.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.969 "prchk_reftag": false, 00:13:49.969 "prchk_guard": false, 00:13:49.969 "ctrlr_loss_timeout_sec": 0, 00:13:49.969 "reconnect_delay_sec": 0, 00:13:49.969 "fast_io_fail_timeout_sec": 0, 00:13:49.969 "psk": "key0", 00:13:49.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:49.969 "hdgst": false, 00:13:49.969 "ddgst": false, 00:13:49.969 "multipath": "multipath" 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_nvme_set_hotplug", 00:13:49.969 "params": { 00:13:49.969 "period_us": 100000, 00:13:49.969 "enable": false 00:13:49.969 } 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "method": "bdev_wait_for_examine" 00:13:49.969 } 00:13:49.969 ] 00:13:49.969 }, 00:13:49.969 { 00:13:49.969 "subsystem": "nbd", 00:13:49.969 "config": [] 00:13:49.969 } 00:13:49.969 ] 00:13:49.969 }' 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72103 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72103 ']' 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72103 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72103 00:13:49.969 killing process with pid 72103 00:13:49.969 Received shutdown signal, test time was about 10.000000 seconds 00:13:49.969 00:13:49.969 Latency(us) 00:13:49.969 [2024-11-20T16:59:08.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.969 [2024-11-20T16:59:08.505Z] =================================================================================================================== 00:13:49.969 [2024-11-20T16:59:08.505Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 72103' 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72103 00:13:49.969 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72103 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72054 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72054 ']' 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72054 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72054 00:13:50.228 killing process with pid 72054 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72054' 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72054 00:13:50.228 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72054 00:13:50.489 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:50.489 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.489 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:50.489 "subsystems": [ 00:13:50.489 { 00:13:50.489 "subsystem": "keyring", 00:13:50.489 "config": [ 00:13:50.489 { 00:13:50.489 "method": "keyring_file_add_key", 00:13:50.489 "params": { 00:13:50.489 "name": "key0", 00:13:50.489 "path": "/tmp/tmp.YwUiPy5WQg" 00:13:50.489 } 00:13:50.489 } 00:13:50.489 ] 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "subsystem": "iobuf", 00:13:50.489 "config": [ 00:13:50.489 { 00:13:50.489 "method": "iobuf_set_options", 00:13:50.489 "params": { 00:13:50.489 "small_pool_count": 8192, 00:13:50.489 "large_pool_count": 1024, 00:13:50.489 "small_bufsize": 8192, 00:13:50.489 "large_bufsize": 135168, 00:13:50.489 "enable_numa": false 00:13:50.489 } 00:13:50.489 } 00:13:50.489 ] 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "subsystem": "sock", 00:13:50.489 "config": [ 00:13:50.489 { 00:13:50.489 "method": "sock_set_default_impl", 00:13:50.489 "params": { 00:13:50.489 "impl_name": "uring" 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "sock_impl_set_options", 00:13:50.489 "params": { 00:13:50.489 "impl_name": "ssl", 00:13:50.489 "recv_buf_size": 4096, 00:13:50.489 "send_buf_size": 4096, 00:13:50.489 "enable_recv_pipe": true, 00:13:50.489 "enable_quickack": false, 00:13:50.489 "enable_placement_id": 0, 00:13:50.489 "enable_zerocopy_send_server": true, 00:13:50.489 "enable_zerocopy_send_client": false, 00:13:50.489 "zerocopy_threshold": 0, 00:13:50.489 "tls_version": 0, 00:13:50.489 "enable_ktls": false 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "sock_impl_set_options", 00:13:50.489 "params": { 00:13:50.489 "impl_name": "posix", 00:13:50.489 "recv_buf_size": 2097152, 
00:13:50.489 "send_buf_size": 2097152, 00:13:50.489 "enable_recv_pipe": true, 00:13:50.489 "enable_quickack": false, 00:13:50.489 "enable_placement_id": 0, 00:13:50.489 "enable_zerocopy_send_server": true, 00:13:50.489 "enable_zerocopy_send_client": false, 00:13:50.489 "zerocopy_threshold": 0, 00:13:50.489 "tls_version": 0, 00:13:50.489 "enable_ktls": false 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "sock_impl_set_options", 00:13:50.489 "params": { 00:13:50.489 "impl_name": "uring", 00:13:50.489 "recv_buf_size": 2097152, 00:13:50.489 "send_buf_size": 2097152, 00:13:50.489 "enable_recv_pipe": true, 00:13:50.489 "enable_quickack": false, 00:13:50.489 "enable_placement_id": 0, 00:13:50.489 "enable_zerocopy_send_server": false, 00:13:50.489 "enable_zerocopy_send_client": false, 00:13:50.489 "zerocopy_threshold": 0, 00:13:50.489 "tls_version": 0, 00:13:50.489 "enable_ktls": false 00:13:50.489 } 00:13:50.489 } 00:13:50.489 ] 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "subsystem": "vmd", 00:13:50.489 "config": [] 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "subsystem": "accel", 00:13:50.489 "config": [ 00:13:50.489 { 00:13:50.489 "method": "accel_set_options", 00:13:50.489 "params": { 00:13:50.489 "small_cache_size": 128, 00:13:50.489 "large_cache_size": 16, 00:13:50.489 "task_count": 2048, 00:13:50.489 "sequence_count": 2048, 00:13:50.489 "buf_count": 2048 00:13:50.489 } 00:13:50.489 } 00:13:50.489 ] 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "subsystem": "bdev", 00:13:50.489 "config": [ 00:13:50.489 { 00:13:50.489 "method": "bdev_set_options", 00:13:50.489 "params": { 00:13:50.489 "bdev_io_pool_size": 65535, 00:13:50.489 "bdev_io_cache_size": 256, 00:13:50.489 "bdev_auto_examine": true, 00:13:50.489 "iobuf_small_cache_size": 128, 00:13:50.489 "iobuf_large_cache_size": 16 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "bdev_raid_set_options", 00:13:50.489 "params": { 00:13:50.489 "process_window_size_kb": 1024, 00:13:50.489 "process_max_bandwidth_mb_sec": 0 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "bdev_iscsi_set_options", 00:13:50.489 "params": { 00:13:50.489 "timeout_sec": 30 00:13:50.489 } 00:13:50.489 }, 00:13:50.489 { 00:13:50.489 "method": "bdev_nvme_set_options", 00:13:50.489 "params": { 00:13:50.489 "action_on_timeout": "none", 00:13:50.489 "timeout_us": 0, 00:13:50.489 "timeout_admin_us": 0, 00:13:50.489 "keep_alive_timeout_ms": 10000, 00:13:50.489 "arbitration_burst": 0, 00:13:50.489 "low_priority_weight": 0, 00:13:50.489 "medium_priority_weight": 0, 00:13:50.489 "high_priority_weight": 0, 00:13:50.489 "nvme_adminq_poll_period_us": 10000, 00:13:50.489 "nvme_ioq_poll_period_us": 0, 00:13:50.489 "io_queue_requests": 0, 00:13:50.489 "delay_cmd_submit": true, 00:13:50.489 "transport_retry_count": 4, 00:13:50.489 "bdev_retry_count": 3, 00:13:50.489 "transport_ack_timeout": 0, 00:13:50.489 "ctrlr_loss_timeout_sec": 0, 00:13:50.489 "reconnect_delay_sec": 0, 00:13:50.489 "fast_io_fail_timeout_sec": 0, 00:13:50.489 "disable_auto_failback": false, 00:13:50.490 "generate_uuids": false, 00:13:50.490 "transport_tos": 0, 00:13:50.490 "nvme_error_stat": false, 00:13:50.490 "rdma_srq_size": 0, 00:13:50.490 "io_path_stat": false, 00:13:50.490 "allow_accel_sequence": false, 00:13:50.490 "rdma_max_cq_size": 0, 00:13:50.490 "rdma_cm_event_timeout_ms": 0, 00:13:50.490 "dhchap_digests": [ 00:13:50.490 "sha256", 00:13:50.490 "sha384", 00:13:50.490 "sha512" 00:13:50.490 ], 00:13:50.490 "dhchap_dhgroups": [ 00:13:50.490 "null", 
00:13:50.490 "ffdhe2048", 00:13:50.490 "ffdhe3072", 00:13:50.490 "ffdhe4096", 00:13:50.490 "ffdhe6144", 00:13:50.490 "ffdhe8192" 00:13:50.490 ] 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "bdev_nvme_set_hotplug", 00:13:50.490 "params": { 00:13:50.490 "period_us": 100000, 00:13:50.490 "enable": false 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "bdev_malloc_create", 00:13:50.490 "params": { 00:13:50.490 "name": "malloc0", 00:13:50.490 "num_blocks": 8192, 00:13:50.490 "block_size": 4096, 00:13:50.490 "physical_block_size": 4096, 00:13:50.490 "uuid": "b6d7cea5-f9ca-40d4-af1f-74d6d529a10a", 00:13:50.490 "optimal_io_boundary": 0, 00:13:50.490 "md_size": 0, 00:13:50.490 "dif_type": 0, 00:13:50.490 "dif_is_head_of_md": false, 00:13:50.490 "dif_pi_format": 0 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "bdev_wait_for_examine" 00:13:50.490 } 00:13:50.490 ] 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "subsystem": "nbd", 00:13:50.490 "config": [] 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "subsystem": "scheduler", 00:13:50.490 "config": [ 00:13:50.490 { 00:13:50.490 "method": "framework_set_scheduler", 00:13:50.490 "params": { 00:13:50.490 "name": "static" 00:13:50.490 } 00:13:50.490 } 00:13:50.490 ] 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "subsystem": "nvmf", 00:13:50.490 "config": [ 00:13:50.490 { 00:13:50.490 "method": "nvmf_set_config", 00:13:50.490 "params": { 00:13:50.490 "discovery_filter": "match_any", 00:13:50.490 "admin_cmd_passthru": { 00:13:50.490 "identify_ctrlr": false 00:13:50.490 }, 00:13:50.490 "dhchap_digests": [ 00:13:50.490 "sha256", 00:13:50.490 "sha384", 00:13:50.490 "sha512" 00:13:50.490 ], 00:13:50.490 "dhchap_dhgroups": [ 00:13:50.490 "null", 00:13:50.490 "ffdhe2048", 00:13:50.490 "ffdhe3072", 00:13:50.490 "ffdhe4096", 00:13:50.490 "ffdhe6144", 00:13:50.490 "ffdhe8192" 00:13:50.490 ] 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_set_max_subsystems", 00:13:50.490 "params": { 00:13:50.490 "max_subsystems": 1024 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_set_crdt", 00:13:50.490 "params": { 00:13:50.490 "crdt1": 0, 00:13:50.490 "crdt2": 0, 00:13:50.490 "crdt3": 0 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_create_transport", 00:13:50.490 "params": { 00:13:50.490 "trtype": "TCP", 00:13:50.490 "max_queue_depth": 128, 00:13:50.490 "max_io_qpairs_per_ctrlr": 127, 00:13:50.490 "in_capsule_data_size": 4096, 00:13:50.490 "max_io_size": 131072, 00:13:50.490 "io_unit_size": 131072, 00:13:50.490 "max_aq_depth": 128, 00:13:50.490 "num_shared_buffers": 511, 00:13:50.490 "buf_cache_size": 4294967295, 00:13:50.490 "dif_insert_or_strip": false, 00:13:50.490 "zcopy": false, 00:13:50.490 "c2h_success": false, 00:13:50.490 "sock_priority": 0, 00:13:50.490 "abort_timeout_sec": 1, 00:13:50.490 "ack_timeout": 0, 00:13:50.490 "data_wr_pool_size": 0 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_create_subsystem", 00:13:50.490 "params": { 00:13:50.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.490 "allow_any_host": false, 00:13:50.490 "serial_number": "SPDK00000000000001", 00:13:50.490 "model_number": "SPDK bdev Controller", 00:13:50.490 "max_namespaces": 10, 00:13:50.490 "min_cntlid": 1, 00:13:50.490 "max_cntlid": 65519, 00:13:50.490 "ana_reporting": false 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_subsystem_add_host", 00:13:50.490 "params": { 00:13:50.490 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:13:50.490 "host": "nqn.2016-06.io.spdk:host1", 00:13:50.490 "psk": "key0" 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_subsystem_add_ns", 00:13:50.490 "params": { 00:13:50.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.490 "namespace": { 00:13:50.490 "nsid": 1, 00:13:50.490 "bdev_name": "malloc0", 00:13:50.490 "nguid": "B6D7CEA5F9CA40D4AF1F74D6D529A10A", 00:13:50.490 "uuid": "b6d7cea5-f9ca-40d4-af1f-74d6d529a10a", 00:13:50.490 "no_auto_visible": false 00:13:50.490 } 00:13:50.490 } 00:13:50.490 }, 00:13:50.490 { 00:13:50.490 "method": "nvmf_subsystem_add_listener", 00:13:50.490 "params": { 00:13:50.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.490 "listen_address": { 00:13:50.490 "trtype": "TCP", 00:13:50.490 "adrfam": "IPv4", 00:13:50.490 "traddr": "10.0.0.3", 00:13:50.490 "trsvcid": "4420" 00:13:50.490 }, 00:13:50.490 "secure_channel": true 00:13:50.490 } 00:13:50.490 } 00:13:50.490 ] 00:13:50.490 } 00:13:50.490 ] 00:13:50.490 }' 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72157 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72157 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72157 ']' 00:13:50.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.490 16:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.490 [2024-11-20 16:59:08.851755] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:50.490 [2024-11-20 16:59:08.851855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.490 [2024-11-20 16:59:08.999875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.750 [2024-11-20 16:59:09.047959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.750 [2024-11-20 16:59:09.048270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.750 [2024-11-20 16:59:09.048461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.750 [2024-11-20 16:59:09.048583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:50.750 [2024-11-20 16:59:09.048616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.750 [2024-11-20 16:59:09.049121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.750 [2024-11-20 16:59:09.214751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.008 [2024-11-20 16:59:09.291774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.008 [2024-11-20 16:59:09.323730] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:51.008 [2024-11-20 16:59:09.323995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:51.267 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.267 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:51.267 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.267 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.267 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72192 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72192 /var/tmp/bdevperf.sock 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72192 ']' 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
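The target restarted just above (pid 72157) is configured entirely from the JSON that save_config produced earlier in the run, handed to nvmf_tgt as -c /dev/fd/62 through process substitution. A minimal sketch of that replay pattern follows; the variable names are illustrative, not the exact ones used by nvmf/common.sh and target/tls.sh:

  # Dump the live target configuration, stop the old target, then start a fresh
  # one from the captured JSON (the shell turns <(...) into a /dev/fd/NN path).
  ntgtconf=$(scripts/rpc.py save_config)
  kill "$old_nvmfpid"; wait "$old_nvmfpid" 2>/dev/null || true
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$ntgtconf") &
  nvmfpid=$!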
00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.528 16:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:51.528 "subsystems": [ 00:13:51.528 { 00:13:51.528 "subsystem": "keyring", 00:13:51.528 "config": [ 00:13:51.528 { 00:13:51.528 "method": "keyring_file_add_key", 00:13:51.528 "params": { 00:13:51.528 "name": "key0", 00:13:51.528 "path": "/tmp/tmp.YwUiPy5WQg" 00:13:51.528 } 00:13:51.528 } 00:13:51.528 ] 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "subsystem": "iobuf", 00:13:51.528 "config": [ 00:13:51.528 { 00:13:51.528 "method": "iobuf_set_options", 00:13:51.528 "params": { 00:13:51.528 "small_pool_count": 8192, 00:13:51.528 "large_pool_count": 1024, 00:13:51.528 "small_bufsize": 8192, 00:13:51.528 "large_bufsize": 135168, 00:13:51.528 "enable_numa": false 00:13:51.528 } 00:13:51.528 } 00:13:51.528 ] 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "subsystem": "sock", 00:13:51.528 "config": [ 00:13:51.528 { 00:13:51.528 "method": "sock_set_default_impl", 00:13:51.528 "params": { 00:13:51.528 "impl_name": "uring" 00:13:51.528 } 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "method": "sock_impl_set_options", 00:13:51.528 "params": { 00:13:51.528 "impl_name": "ssl", 00:13:51.528 "recv_buf_size": 4096, 00:13:51.528 "send_buf_size": 4096, 00:13:51.528 "enable_recv_pipe": true, 00:13:51.528 "enable_quickack": false, 00:13:51.528 "enable_placement_id": 0, 00:13:51.528 "enable_zerocopy_send_server": true, 00:13:51.528 "enable_zerocopy_send_client": false, 00:13:51.528 "zerocopy_threshold": 0, 00:13:51.528 "tls_version": 0, 00:13:51.528 "enable_ktls": false 00:13:51.528 } 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "method": "sock_impl_set_options", 00:13:51.528 "params": { 00:13:51.528 "impl_name": "posix", 00:13:51.528 "recv_buf_size": 2097152, 00:13:51.528 "send_buf_size": 2097152, 00:13:51.528 "enable_recv_pipe": true, 00:13:51.528 "enable_quickack": false, 00:13:51.528 "enable_placement_id": 0, 00:13:51.528 "enable_zerocopy_send_server": true, 00:13:51.528 "enable_zerocopy_send_client": false, 00:13:51.528 "zerocopy_threshold": 0, 00:13:51.528 "tls_version": 0, 00:13:51.528 "enable_ktls": false 00:13:51.528 } 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "method": "sock_impl_set_options", 00:13:51.528 "params": { 00:13:51.528 "impl_name": "uring", 00:13:51.528 "recv_buf_size": 2097152, 00:13:51.528 "send_buf_size": 2097152, 00:13:51.528 "enable_recv_pipe": true, 00:13:51.528 "enable_quickack": false, 00:13:51.528 "enable_placement_id": 0, 00:13:51.528 "enable_zerocopy_send_server": false, 00:13:51.528 "enable_zerocopy_send_client": false, 00:13:51.528 "zerocopy_threshold": 0, 00:13:51.528 "tls_version": 0, 00:13:51.528 "enable_ktls": false 00:13:51.528 } 00:13:51.528 } 00:13:51.528 ] 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "subsystem": "vmd", 00:13:51.528 "config": [] 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "subsystem": "accel", 00:13:51.528 "config": [ 00:13:51.528 { 00:13:51.528 "method": "accel_set_options", 00:13:51.528 "params": { 00:13:51.528 "small_cache_size": 128, 00:13:51.528 "large_cache_size": 16, 00:13:51.528 "task_count": 2048, 00:13:51.528 "sequence_count": 
2048, 00:13:51.528 "buf_count": 2048 00:13:51.528 } 00:13:51.528 } 00:13:51.528 ] 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "subsystem": "bdev", 00:13:51.528 "config": [ 00:13:51.528 { 00:13:51.528 "method": "bdev_set_options", 00:13:51.528 "params": { 00:13:51.528 "bdev_io_pool_size": 65535, 00:13:51.528 "bdev_io_cache_size": 256, 00:13:51.528 "bdev_auto_examine": true, 00:13:51.528 "iobuf_small_cache_size": 128, 00:13:51.528 "iobuf_large_cache_size": 16 00:13:51.528 } 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "method": "bdev_raid_set_options", 00:13:51.528 "params": { 00:13:51.528 "process_window_size_kb": 1024, 00:13:51.528 "process_max_bandwidth_mb_sec": 0 00:13:51.528 } 00:13:51.528 }, 00:13:51.528 { 00:13:51.528 "method": "bdev_iscsi_set_options", 00:13:51.528 "params": { 00:13:51.529 "timeout_sec": 30 00:13:51.529 } 00:13:51.529 }, 00:13:51.529 { 00:13:51.529 "method": "bdev_nvme_set_options", 00:13:51.529 "params": { 00:13:51.529 "action_on_timeout": "none", 00:13:51.529 "timeout_us": 0, 00:13:51.529 "timeout_admin_us": 0, 00:13:51.529 "keep_alive_timeout_ms": 10000, 00:13:51.529 "arbitration_burst": 0, 00:13:51.529 "low_priority_weight": 0, 00:13:51.529 "medium_priority_weight": 0, 00:13:51.529 "high_priority_weight": 0, 00:13:51.529 "nvme_adminq_poll_period_us": 10000, 00:13:51.529 "nvme_ioq_poll_period_us": 0, 00:13:51.529 "io_queue_requests": 512, 00:13:51.529 "delay_cmd_submit": true, 00:13:51.529 "transport_retry_count": 4, 00:13:51.529 "bdev_retry_count": 3, 00:13:51.529 "transport_ack_timeout": 0, 00:13:51.529 "ctrlr_loss_timeout_sec": 0, 00:13:51.529 "reconnect_delay_sec": 0, 00:13:51.529 "fast_io_fail_timeout_sec": 0, 00:13:51.529 "disable_auto_failback": false, 00:13:51.529 "generate_uuids": false, 00:13:51.529 "transport_tos": 0, 00:13:51.529 "nvme_error_stat": false, 00:13:51.529 "rdma_srq_size": 0, 00:13:51.529 "io_path_stat": false, 00:13:51.529 "allow_accel_sequence": false, 00:13:51.529 "rdma_max_cq_size": 0, 00:13:51.529 "rdma_cm_event_timeout_ms": 0, 00:13:51.529 "dhchap_digests": [ 00:13:51.529 "sha256", 00:13:51.529 "sha384", 00:13:51.529 "sha512" 00:13:51.529 ], 00:13:51.529 "dhchap_dhgroups": [ 00:13:51.529 "null", 00:13:51.529 "ffdhe2048", 00:13:51.529 "ffdhe3072", 00:13:51.529 "ffdhe4096", 00:13:51.529 "ffdhe6144", 00:13:51.529 "ffdhe8192" 00:13:51.529 ] 00:13:51.529 } 00:13:51.529 }, 00:13:51.529 { 00:13:51.529 "method": "bdev_nvme_attach_controller", 00:13:51.529 "params": { 00:13:51.529 "name": "TLSTEST", 00:13:51.529 "trtype": "TCP", 00:13:51.529 "adrfam": "IPv4", 00:13:51.529 "traddr": "10.0.0.3", 00:13:51.529 "trsvcid": "4420", 00:13:51.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.529 "prchk_reftag": false, 00:13:51.529 "prchk_guard": false, 00:13:51.529 "ctrlr_loss_timeout_sec": 0, 00:13:51.529 "reconnect_delay_sec": 0, 00:13:51.529 "fast_io_fail_timeout_sec": 0, 00:13:51.529 "psk": "key0", 00:13:51.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.529 "hdgst": false, 00:13:51.529 "ddgst": false, 00:13:51.529 "multipath": "multipath" 00:13:51.529 } 00:13:51.529 }, 00:13:51.529 { 00:13:51.529 "method": "bdev_nvme_set_hotplug", 00:13:51.529 "params": { 00:13:51.529 "period_us": 100000, 00:13:51.529 "enable": false 00:13:51.529 } 00:13:51.529 }, 00:13:51.529 { 00:13:51.529 "method": "bdev_wait_for_examine" 00:13:51.529 } 00:13:51.529 ] 00:13:51.529 }, 00:13:51.529 { 00:13:51.529 "subsystem": "nbd", 00:13:51.529 "config": [] 00:13:51.529 } 00:13:51.529 ] 00:13:51.529 }' 00:13:51.529 [2024-11-20 16:59:09.856203] Starting SPDK v25.01-pre git 
sha1 2e015e34f / DPDK 24.03.0 initialization... 00:13:51.529 [2024-11-20 16:59:09.856459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72192 ] 00:13:51.529 [2024-11-20 16:59:09.998113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.529 [2024-11-20 16:59:10.043027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.789 [2024-11-20 16:59:10.175535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.789 [2024-11-20 16:59:10.223256] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:52.357 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.357 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:52.357 16:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:52.616 Running I/O for 10 seconds... 00:13:54.491 4410.00 IOPS, 17.23 MiB/s [2024-11-20T16:59:14.406Z] 4543.50 IOPS, 17.75 MiB/s [2024-11-20T16:59:15.344Z] 4480.00 IOPS, 17.50 MiB/s [2024-11-20T16:59:16.330Z] 4397.25 IOPS, 17.18 MiB/s [2024-11-20T16:59:17.267Z] 4394.20 IOPS, 17.16 MiB/s [2024-11-20T16:59:18.204Z] 4472.83 IOPS, 17.47 MiB/s [2024-11-20T16:59:19.141Z] 4527.00 IOPS, 17.68 MiB/s [2024-11-20T16:59:20.075Z] 4564.50 IOPS, 17.83 MiB/s [2024-11-20T16:59:21.453Z] 4598.89 IOPS, 17.96 MiB/s [2024-11-20T16:59:21.453Z] 4612.10 IOPS, 18.02 MiB/s 00:14:02.917 Latency(us) 00:14:02.917 [2024-11-20T16:59:21.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.917 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:02.917 Verification LBA range: start 0x0 length 0x2000 00:14:02.917 TLSTESTn1 : 10.02 4617.49 18.04 0.00 0.00 27673.95 3991.74 21924.77 00:14:02.917 [2024-11-20T16:59:21.453Z] =================================================================================================================== 00:14:02.917 [2024-11-20T16:59:21.453Z] Total : 4617.49 18.04 0.00 0.00 27673.95 3991.74 21924.77 00:14:02.917 { 00:14:02.917 "results": [ 00:14:02.917 { 00:14:02.917 "job": "TLSTESTn1", 00:14:02.917 "core_mask": "0x4", 00:14:02.917 "workload": "verify", 00:14:02.917 "status": "finished", 00:14:02.917 "verify_range": { 00:14:02.917 "start": 0, 00:14:02.917 "length": 8192 00:14:02.917 }, 00:14:02.917 "queue_depth": 128, 00:14:02.917 "io_size": 4096, 00:14:02.917 "runtime": 10.016057, 00:14:02.917 "iops": 4617.485703206362, 00:14:02.917 "mibps": 18.03705352814985, 00:14:02.917 "io_failed": 0, 00:14:02.917 "io_timeout": 0, 00:14:02.917 "avg_latency_us": 27673.94693970779, 00:14:02.917 "min_latency_us": 3991.7381818181816, 00:14:02.917 "max_latency_us": 21924.77090909091 00:14:02.917 } 00:14:02.917 ], 00:14:02.917 "core_count": 1 00:14:02.917 } 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72192 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72192 ']' 00:14:02.917 
16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72192 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72192 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72192' 00:14:02.917 killing process with pid 72192 00:14:02.917 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.917 00:14:02.917 Latency(us) 00:14:02.917 [2024-11-20T16:59:21.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.917 [2024-11-20T16:59:21.453Z] =================================================================================================================== 00:14:02.917 [2024-11-20T16:59:21.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72192 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72192 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72157 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72157 ']' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72157 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72157 00:14:02.917 killing process with pid 72157 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72157' 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72157 00:14:02.917 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72157 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72325 00:14:03.176 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
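The 10-second verify run above was driven from outside the bdevperf process: bdevperf was started idle with -z and only begins I/O once perform_tests arrives on its RPC socket. As a quick sanity check on the reported numbers, 4617.49 IOPS at 4096-byte I/Os is roughly 4617.49 * 4096 / 1048576 = 18.0 MiB/s, matching the MiB/s column. A sketch of the same flow, assuming $bdevperfconf holds the initiator JSON shown above:

  # Start bdevperf idle (-z) with the saved initiator config, then trigger the
  # timed run through its dedicated RPC socket.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests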
00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72325 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72325 ']' 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.177 16:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 [2024-11-20 16:59:21.603562] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:03.177 [2024-11-20 16:59:21.603658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.436 [2024-11-20 16:59:21.758145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.436 [2024-11-20 16:59:21.819525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.436 [2024-11-20 16:59:21.819873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.436 [2024-11-20 16:59:21.819910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.436 [2024-11-20 16:59:21.819921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.436 [2024-11-20 16:59:21.819930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.436 [2024-11-20 16:59:21.820435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.436 [2024-11-20 16:59:21.880025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.YwUiPy5WQg 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YwUiPy5WQg 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:04.374 [2024-11-20 16:59:22.870265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.374 16:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:04.634 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:04.893 [2024-11-20 16:59:23.422375] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:04.893 [2024-11-20 16:59:23.422796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.151 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:05.410 malloc0 00:14:05.410 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:05.671 16:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:14:05.955 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72385 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72385 /var/tmp/bdevperf.sock 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72385 ']' 00:14:06.216 
16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.216 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.216 [2024-11-20 16:59:24.560181] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:06.216 [2024-11-20 16:59:24.560530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72385 ] 00:14:06.216 [2024-11-20 16:59:24.704999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.475 [2024-11-20 16:59:24.755616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.475 [2024-11-20 16:59:24.815631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.476 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.476 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:06.476 16:59:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:14:06.735 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:06.994 [2024-11-20 16:59:25.420169] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.994 nvme0n1 00:14:06.994 16:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:07.253 Running I/O for 1 seconds... 
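Stripped of the xtrace noise, the TLS wiring exercised in this pass boils down to the rpc.py calls already visible above: the target side (tls.sh@52-59) builds the subsystem, adds the TLS-enabled listener, and binds the PSK to the host, while the initiator side (tls.sh@229-230) loads the same key file into bdevperf's own keyring before attaching the controller with --psk. The key file itself is not shown in the log; it is expected to hold a PSK in the NVMe TLS PSK interchange format (e.g. NVMeTLSkey-1:01:<base64>: for an HMAC SHA-256 key). The sequence as executed:

  # Target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # Initiator side (bdevperf's RPC socket)
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests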
00:14:08.188 3935.00 IOPS, 15.37 MiB/s 00:14:08.188 Latency(us) 00:14:08.188 [2024-11-20T16:59:26.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.188 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:08.188 Verification LBA range: start 0x0 length 0x2000 00:14:08.188 nvme0n1 : 1.03 3951.31 15.43 0.00 0.00 31955.59 7417.48 19779.96 00:14:08.188 [2024-11-20T16:59:26.724Z] =================================================================================================================== 00:14:08.188 [2024-11-20T16:59:26.724Z] Total : 3951.31 15.43 0.00 0.00 31955.59 7417.48 19779.96 00:14:08.188 { 00:14:08.188 "results": [ 00:14:08.188 { 00:14:08.188 "job": "nvme0n1", 00:14:08.188 "core_mask": "0x2", 00:14:08.188 "workload": "verify", 00:14:08.188 "status": "finished", 00:14:08.188 "verify_range": { 00:14:08.188 "start": 0, 00:14:08.188 "length": 8192 00:14:08.188 }, 00:14:08.188 "queue_depth": 128, 00:14:08.188 "io_size": 4096, 00:14:08.188 "runtime": 1.028267, 00:14:08.188 "iops": 3951.3083664067794, 00:14:08.188 "mibps": 15.434798306276482, 00:14:08.188 "io_failed": 0, 00:14:08.188 "io_timeout": 0, 00:14:08.188 "avg_latency_us": 31955.592909851657, 00:14:08.188 "min_latency_us": 7417.483636363636, 00:14:08.188 "max_latency_us": 19779.956363636364 00:14:08.188 } 00:14:08.188 ], 00:14:08.188 "core_count": 1 00:14:08.188 } 00:14:08.188 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72385 00:14:08.189 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72385 ']' 00:14:08.189 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72385 00:14:08.189 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:08.189 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.189 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72385 00:14:08.447 killing process with pid 72385 00:14:08.447 Received shutdown signal, test time was about 1.000000 seconds 00:14:08.447 00:14:08.447 Latency(us) 00:14:08.447 [2024-11-20T16:59:26.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.447 [2024-11-20T16:59:26.983Z] =================================================================================================================== 00:14:08.447 [2024-11-20T16:59:26.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72385' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72385 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72385 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72325 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72325 ']' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72325 00:14:08.447 16:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72325 00:14:08.447 killing process with pid 72325 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72325' 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72325 00:14:08.447 16:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72325 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72430 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72430 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72430 ']' 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.706 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.965 [2024-11-20 16:59:27.255849] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:08.965 [2024-11-20 16:59:27.255955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.965 [2024-11-20 16:59:27.407019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.965 [2024-11-20 16:59:27.455537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.965 [2024-11-20 16:59:27.455836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:08.965 [2024-11-20 16:59:27.455989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.965 [2024-11-20 16:59:27.456201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.965 [2024-11-20 16:59:27.456319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.965 [2024-11-20 16:59:27.456815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.224 [2024-11-20 16:59:27.513347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.224 [2024-11-20 16:59:27.631005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.224 malloc0 00:14:09.224 [2024-11-20 16:59:27.662050] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:09.224 [2024-11-20 16:59:27.662257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:09.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72449 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72449 /var/tmp/bdevperf.sock 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72449 ']' 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
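waitforlisten, echoed just above, blocks until the freshly launched application (here bdevperf, pid 72449) is both still alive and answering RPCs on its UNIX socket. A simplified sketch of that helper, assuming it polls the socket with rpc_get_methods; the real implementation lives in autotest_common.sh and does more bookkeeping:

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      # Keep polling while the process exists but its RPC socket is not answering yet.
      while kill -0 "$pid" 2>/dev/null &&
            ! scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; do
          sleep 0.1
      done
      # Non-zero exit if the process died before it ever started listening.
      kill -0 "$pid" 2>/dev/null
  }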
00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.224 16:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.224 [2024-11-20 16:59:27.749950] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:09.224 [2024-11-20 16:59:27.750228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72449 ] 00:14:09.483 [2024-11-20 16:59:27.905264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.483 [2024-11-20 16:59:27.972317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.743 [2024-11-20 16:59:28.035743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.310 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.310 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.310 16:59:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YwUiPy5WQg 00:14:10.569 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:10.828 [2024-11-20 16:59:29.264350] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.828 nvme0n1 00:14:10.828 16:59:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:11.086 Running I/O for 1 seconds... 
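While the one-second run above executes (or before kicking it off), the attach can be double-checked from the same RPC socket; the controller created by bdev_nvme_attach_controller is named nvme0, so its first namespace surfaces as bdev nvme0n1 (the malloc0 namespace on cnode1). A quick check of that kind, not part of the test script itself:

  # Confirm the TLS-attached controller and the bdev it exposes.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n nvme0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b nvme0n1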
00:14:12.023 3893.00 IOPS, 15.21 MiB/s 00:14:12.023 Latency(us) 00:14:12.023 [2024-11-20T16:59:30.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.023 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:12.023 Verification LBA range: start 0x0 length 0x2000 00:14:12.023 nvme0n1 : 1.02 3961.77 15.48 0.00 0.00 32070.35 3530.01 26452.71 00:14:12.023 [2024-11-20T16:59:30.559Z] =================================================================================================================== 00:14:12.023 [2024-11-20T16:59:30.559Z] Total : 3961.77 15.48 0.00 0.00 32070.35 3530.01 26452.71 00:14:12.023 { 00:14:12.023 "results": [ 00:14:12.023 { 00:14:12.023 "job": "nvme0n1", 00:14:12.023 "core_mask": "0x2", 00:14:12.023 "workload": "verify", 00:14:12.023 "status": "finished", 00:14:12.023 "verify_range": { 00:14:12.023 "start": 0, 00:14:12.023 "length": 8192 00:14:12.023 }, 00:14:12.023 "queue_depth": 128, 00:14:12.023 "io_size": 4096, 00:14:12.023 "runtime": 1.015204, 00:14:12.023 "iops": 3961.765320073601, 00:14:12.023 "mibps": 15.475645781537503, 00:14:12.023 "io_failed": 0, 00:14:12.023 "io_timeout": 0, 00:14:12.023 "avg_latency_us": 32070.348675014695, 00:14:12.023 "min_latency_us": 3530.0072727272727, 00:14:12.023 "max_latency_us": 26452.712727272727 00:14:12.023 } 00:14:12.023 ], 00:14:12.023 "core_count": 1 00:14:12.023 } 00:14:12.023 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:12.023 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.023 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.282 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.282 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:12.282 "subsystems": [ 00:14:12.282 { 00:14:12.282 "subsystem": "keyring", 00:14:12.282 "config": [ 00:14:12.282 { 00:14:12.282 "method": "keyring_file_add_key", 00:14:12.282 "params": { 00:14:12.282 "name": "key0", 00:14:12.282 "path": "/tmp/tmp.YwUiPy5WQg" 00:14:12.282 } 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "iobuf", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "iobuf_set_options", 00:14:12.283 "params": { 00:14:12.283 "small_pool_count": 8192, 00:14:12.283 "large_pool_count": 1024, 00:14:12.283 "small_bufsize": 8192, 00:14:12.283 "large_bufsize": 135168, 00:14:12.283 "enable_numa": false 00:14:12.283 } 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "sock", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "sock_set_default_impl", 00:14:12.283 "params": { 00:14:12.283 "impl_name": "uring" 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "sock_impl_set_options", 00:14:12.283 "params": { 00:14:12.283 "impl_name": "ssl", 00:14:12.283 "recv_buf_size": 4096, 00:14:12.283 "send_buf_size": 4096, 00:14:12.283 "enable_recv_pipe": true, 00:14:12.283 "enable_quickack": false, 00:14:12.283 "enable_placement_id": 0, 00:14:12.283 "enable_zerocopy_send_server": true, 00:14:12.283 "enable_zerocopy_send_client": false, 00:14:12.283 "zerocopy_threshold": 0, 00:14:12.283 "tls_version": 0, 00:14:12.283 "enable_ktls": false 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "sock_impl_set_options", 00:14:12.283 "params": { 00:14:12.283 "impl_name": 
"posix", 00:14:12.283 "recv_buf_size": 2097152, 00:14:12.283 "send_buf_size": 2097152, 00:14:12.283 "enable_recv_pipe": true, 00:14:12.283 "enable_quickack": false, 00:14:12.283 "enable_placement_id": 0, 00:14:12.283 "enable_zerocopy_send_server": true, 00:14:12.283 "enable_zerocopy_send_client": false, 00:14:12.283 "zerocopy_threshold": 0, 00:14:12.283 "tls_version": 0, 00:14:12.283 "enable_ktls": false 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "sock_impl_set_options", 00:14:12.283 "params": { 00:14:12.283 "impl_name": "uring", 00:14:12.283 "recv_buf_size": 2097152, 00:14:12.283 "send_buf_size": 2097152, 00:14:12.283 "enable_recv_pipe": true, 00:14:12.283 "enable_quickack": false, 00:14:12.283 "enable_placement_id": 0, 00:14:12.283 "enable_zerocopy_send_server": false, 00:14:12.283 "enable_zerocopy_send_client": false, 00:14:12.283 "zerocopy_threshold": 0, 00:14:12.283 "tls_version": 0, 00:14:12.283 "enable_ktls": false 00:14:12.283 } 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "vmd", 00:14:12.283 "config": [] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "accel", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "accel_set_options", 00:14:12.283 "params": { 00:14:12.283 "small_cache_size": 128, 00:14:12.283 "large_cache_size": 16, 00:14:12.283 "task_count": 2048, 00:14:12.283 "sequence_count": 2048, 00:14:12.283 "buf_count": 2048 00:14:12.283 } 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "bdev", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "bdev_set_options", 00:14:12.283 "params": { 00:14:12.283 "bdev_io_pool_size": 65535, 00:14:12.283 "bdev_io_cache_size": 256, 00:14:12.283 "bdev_auto_examine": true, 00:14:12.283 "iobuf_small_cache_size": 128, 00:14:12.283 "iobuf_large_cache_size": 16 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_raid_set_options", 00:14:12.283 "params": { 00:14:12.283 "process_window_size_kb": 1024, 00:14:12.283 "process_max_bandwidth_mb_sec": 0 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_iscsi_set_options", 00:14:12.283 "params": { 00:14:12.283 "timeout_sec": 30 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_nvme_set_options", 00:14:12.283 "params": { 00:14:12.283 "action_on_timeout": "none", 00:14:12.283 "timeout_us": 0, 00:14:12.283 "timeout_admin_us": 0, 00:14:12.283 "keep_alive_timeout_ms": 10000, 00:14:12.283 "arbitration_burst": 0, 00:14:12.283 "low_priority_weight": 0, 00:14:12.283 "medium_priority_weight": 0, 00:14:12.283 "high_priority_weight": 0, 00:14:12.283 "nvme_adminq_poll_period_us": 10000, 00:14:12.283 "nvme_ioq_poll_period_us": 0, 00:14:12.283 "io_queue_requests": 0, 00:14:12.283 "delay_cmd_submit": true, 00:14:12.283 "transport_retry_count": 4, 00:14:12.283 "bdev_retry_count": 3, 00:14:12.283 "transport_ack_timeout": 0, 00:14:12.283 "ctrlr_loss_timeout_sec": 0, 00:14:12.283 "reconnect_delay_sec": 0, 00:14:12.283 "fast_io_fail_timeout_sec": 0, 00:14:12.283 "disable_auto_failback": false, 00:14:12.283 "generate_uuids": false, 00:14:12.283 "transport_tos": 0, 00:14:12.283 "nvme_error_stat": false, 00:14:12.283 "rdma_srq_size": 0, 00:14:12.283 "io_path_stat": false, 00:14:12.283 "allow_accel_sequence": false, 00:14:12.283 "rdma_max_cq_size": 0, 00:14:12.283 "rdma_cm_event_timeout_ms": 0, 00:14:12.283 "dhchap_digests": [ 00:14:12.283 "sha256", 00:14:12.283 "sha384", 00:14:12.283 "sha512" 00:14:12.283 ], 00:14:12.283 
"dhchap_dhgroups": [ 00:14:12.283 "null", 00:14:12.283 "ffdhe2048", 00:14:12.283 "ffdhe3072", 00:14:12.283 "ffdhe4096", 00:14:12.283 "ffdhe6144", 00:14:12.283 "ffdhe8192" 00:14:12.283 ] 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_nvme_set_hotplug", 00:14:12.283 "params": { 00:14:12.283 "period_us": 100000, 00:14:12.283 "enable": false 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_malloc_create", 00:14:12.283 "params": { 00:14:12.283 "name": "malloc0", 00:14:12.283 "num_blocks": 8192, 00:14:12.283 "block_size": 4096, 00:14:12.283 "physical_block_size": 4096, 00:14:12.283 "uuid": "d5c28480-012e-488a-bd7b-1ba4c2345e15", 00:14:12.283 "optimal_io_boundary": 0, 00:14:12.283 "md_size": 0, 00:14:12.283 "dif_type": 0, 00:14:12.283 "dif_is_head_of_md": false, 00:14:12.283 "dif_pi_format": 0 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "bdev_wait_for_examine" 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "nbd", 00:14:12.283 "config": [] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "scheduler", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "framework_set_scheduler", 00:14:12.283 "params": { 00:14:12.283 "name": "static" 00:14:12.283 } 00:14:12.283 } 00:14:12.283 ] 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "subsystem": "nvmf", 00:14:12.283 "config": [ 00:14:12.283 { 00:14:12.283 "method": "nvmf_set_config", 00:14:12.283 "params": { 00:14:12.283 "discovery_filter": "match_any", 00:14:12.283 "admin_cmd_passthru": { 00:14:12.283 "identify_ctrlr": false 00:14:12.283 }, 00:14:12.283 "dhchap_digests": [ 00:14:12.283 "sha256", 00:14:12.283 "sha384", 00:14:12.283 "sha512" 00:14:12.283 ], 00:14:12.283 "dhchap_dhgroups": [ 00:14:12.283 "null", 00:14:12.283 "ffdhe2048", 00:14:12.283 "ffdhe3072", 00:14:12.283 "ffdhe4096", 00:14:12.283 "ffdhe6144", 00:14:12.283 "ffdhe8192" 00:14:12.283 ] 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "nvmf_set_max_subsystems", 00:14:12.283 "params": { 00:14:12.283 "max_subsystems": 1024 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "nvmf_set_crdt", 00:14:12.283 "params": { 00:14:12.283 "crdt1": 0, 00:14:12.283 "crdt2": 0, 00:14:12.283 "crdt3": 0 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "nvmf_create_transport", 00:14:12.283 "params": { 00:14:12.283 "trtype": "TCP", 00:14:12.283 "max_queue_depth": 128, 00:14:12.283 "max_io_qpairs_per_ctrlr": 127, 00:14:12.283 "in_capsule_data_size": 4096, 00:14:12.283 "max_io_size": 131072, 00:14:12.283 "io_unit_size": 131072, 00:14:12.283 "max_aq_depth": 128, 00:14:12.283 "num_shared_buffers": 511, 00:14:12.283 "buf_cache_size": 4294967295, 00:14:12.283 "dif_insert_or_strip": false, 00:14:12.283 "zcopy": false, 00:14:12.283 "c2h_success": false, 00:14:12.283 "sock_priority": 0, 00:14:12.283 "abort_timeout_sec": 1, 00:14:12.283 "ack_timeout": 0, 00:14:12.283 "data_wr_pool_size": 0 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "nvmf_create_subsystem", 00:14:12.283 "params": { 00:14:12.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.283 "allow_any_host": false, 00:14:12.283 "serial_number": "00000000000000000000", 00:14:12.283 "model_number": "SPDK bdev Controller", 00:14:12.283 "max_namespaces": 32, 00:14:12.283 "min_cntlid": 1, 00:14:12.283 "max_cntlid": 65519, 00:14:12.283 "ana_reporting": false 00:14:12.283 } 00:14:12.283 }, 00:14:12.283 { 00:14:12.283 "method": "nvmf_subsystem_add_host", 
00:14:12.283 "params": { 00:14:12.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.284 "host": "nqn.2016-06.io.spdk:host1", 00:14:12.284 "psk": "key0" 00:14:12.284 } 00:14:12.284 }, 00:14:12.284 { 00:14:12.284 "method": "nvmf_subsystem_add_ns", 00:14:12.284 "params": { 00:14:12.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.284 "namespace": { 00:14:12.284 "nsid": 1, 00:14:12.284 "bdev_name": "malloc0", 00:14:12.284 "nguid": "D5C28480012E488ABD7B1BA4C2345E15", 00:14:12.284 "uuid": "d5c28480-012e-488a-bd7b-1ba4c2345e15", 00:14:12.284 "no_auto_visible": false 00:14:12.284 } 00:14:12.284 } 00:14:12.284 }, 00:14:12.284 { 00:14:12.284 "method": "nvmf_subsystem_add_listener", 00:14:12.284 "params": { 00:14:12.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.284 "listen_address": { 00:14:12.284 "trtype": "TCP", 00:14:12.284 "adrfam": "IPv4", 00:14:12.284 "traddr": "10.0.0.3", 00:14:12.284 "trsvcid": "4420" 00:14:12.284 }, 00:14:12.284 "secure_channel": false, 00:14:12.284 "sock_impl": "ssl" 00:14:12.284 } 00:14:12.284 } 00:14:12.284 ] 00:14:12.284 } 00:14:12.284 ] 00:14:12.284 }' 00:14:12.284 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:12.544 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:12.544 "subsystems": [ 00:14:12.544 { 00:14:12.544 "subsystem": "keyring", 00:14:12.544 "config": [ 00:14:12.544 { 00:14:12.544 "method": "keyring_file_add_key", 00:14:12.544 "params": { 00:14:12.544 "name": "key0", 00:14:12.544 "path": "/tmp/tmp.YwUiPy5WQg" 00:14:12.544 } 00:14:12.544 } 00:14:12.544 ] 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "subsystem": "iobuf", 00:14:12.544 "config": [ 00:14:12.544 { 00:14:12.544 "method": "iobuf_set_options", 00:14:12.544 "params": { 00:14:12.544 "small_pool_count": 8192, 00:14:12.544 "large_pool_count": 1024, 00:14:12.544 "small_bufsize": 8192, 00:14:12.544 "large_bufsize": 135168, 00:14:12.544 "enable_numa": false 00:14:12.544 } 00:14:12.544 } 00:14:12.544 ] 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "subsystem": "sock", 00:14:12.544 "config": [ 00:14:12.544 { 00:14:12.544 "method": "sock_set_default_impl", 00:14:12.544 "params": { 00:14:12.544 "impl_name": "uring" 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "sock_impl_set_options", 00:14:12.544 "params": { 00:14:12.544 "impl_name": "ssl", 00:14:12.544 "recv_buf_size": 4096, 00:14:12.544 "send_buf_size": 4096, 00:14:12.544 "enable_recv_pipe": true, 00:14:12.544 "enable_quickack": false, 00:14:12.544 "enable_placement_id": 0, 00:14:12.544 "enable_zerocopy_send_server": true, 00:14:12.544 "enable_zerocopy_send_client": false, 00:14:12.544 "zerocopy_threshold": 0, 00:14:12.544 "tls_version": 0, 00:14:12.544 "enable_ktls": false 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "sock_impl_set_options", 00:14:12.544 "params": { 00:14:12.544 "impl_name": "posix", 00:14:12.544 "recv_buf_size": 2097152, 00:14:12.544 "send_buf_size": 2097152, 00:14:12.544 "enable_recv_pipe": true, 00:14:12.544 "enable_quickack": false, 00:14:12.544 "enable_placement_id": 0, 00:14:12.544 "enable_zerocopy_send_server": true, 00:14:12.544 "enable_zerocopy_send_client": false, 00:14:12.544 "zerocopy_threshold": 0, 00:14:12.544 "tls_version": 0, 00:14:12.544 "enable_ktls": false 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "sock_impl_set_options", 00:14:12.544 "params": { 00:14:12.544 "impl_name": "uring", 00:14:12.544 
"recv_buf_size": 2097152, 00:14:12.544 "send_buf_size": 2097152, 00:14:12.544 "enable_recv_pipe": true, 00:14:12.544 "enable_quickack": false, 00:14:12.544 "enable_placement_id": 0, 00:14:12.544 "enable_zerocopy_send_server": false, 00:14:12.544 "enable_zerocopy_send_client": false, 00:14:12.544 "zerocopy_threshold": 0, 00:14:12.544 "tls_version": 0, 00:14:12.544 "enable_ktls": false 00:14:12.544 } 00:14:12.544 } 00:14:12.544 ] 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "subsystem": "vmd", 00:14:12.544 "config": [] 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "subsystem": "accel", 00:14:12.544 "config": [ 00:14:12.544 { 00:14:12.544 "method": "accel_set_options", 00:14:12.544 "params": { 00:14:12.544 "small_cache_size": 128, 00:14:12.544 "large_cache_size": 16, 00:14:12.544 "task_count": 2048, 00:14:12.544 "sequence_count": 2048, 00:14:12.544 "buf_count": 2048 00:14:12.544 } 00:14:12.544 } 00:14:12.544 ] 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "subsystem": "bdev", 00:14:12.544 "config": [ 00:14:12.544 { 00:14:12.544 "method": "bdev_set_options", 00:14:12.544 "params": { 00:14:12.544 "bdev_io_pool_size": 65535, 00:14:12.544 "bdev_io_cache_size": 256, 00:14:12.544 "bdev_auto_examine": true, 00:14:12.544 "iobuf_small_cache_size": 128, 00:14:12.544 "iobuf_large_cache_size": 16 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "bdev_raid_set_options", 00:14:12.544 "params": { 00:14:12.544 "process_window_size_kb": 1024, 00:14:12.544 "process_max_bandwidth_mb_sec": 0 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "bdev_iscsi_set_options", 00:14:12.544 "params": { 00:14:12.544 "timeout_sec": 30 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "bdev_nvme_set_options", 00:14:12.544 "params": { 00:14:12.544 "action_on_timeout": "none", 00:14:12.544 "timeout_us": 0, 00:14:12.544 "timeout_admin_us": 0, 00:14:12.544 "keep_alive_timeout_ms": 10000, 00:14:12.544 "arbitration_burst": 0, 00:14:12.544 "low_priority_weight": 0, 00:14:12.544 "medium_priority_weight": 0, 00:14:12.544 "high_priority_weight": 0, 00:14:12.544 "nvme_adminq_poll_period_us": 10000, 00:14:12.544 "nvme_ioq_poll_period_us": 0, 00:14:12.544 "io_queue_requests": 512, 00:14:12.544 "delay_cmd_submit": true, 00:14:12.544 "transport_retry_count": 4, 00:14:12.544 "bdev_retry_count": 3, 00:14:12.544 "transport_ack_timeout": 0, 00:14:12.544 "ctrlr_loss_timeout_sec": 0, 00:14:12.544 "reconnect_delay_sec": 0, 00:14:12.544 "fast_io_fail_timeout_sec": 0, 00:14:12.544 "disable_auto_failback": false, 00:14:12.544 "generate_uuids": false, 00:14:12.544 "transport_tos": 0, 00:14:12.544 "nvme_error_stat": false, 00:14:12.544 "rdma_srq_size": 0, 00:14:12.544 "io_path_stat": false, 00:14:12.544 "allow_accel_sequence": false, 00:14:12.544 "rdma_max_cq_size": 0, 00:14:12.544 "rdma_cm_event_timeout_ms": 0, 00:14:12.544 "dhchap_digests": [ 00:14:12.544 "sha256", 00:14:12.544 "sha384", 00:14:12.544 "sha512" 00:14:12.544 ], 00:14:12.544 "dhchap_dhgroups": [ 00:14:12.544 "null", 00:14:12.544 "ffdhe2048", 00:14:12.544 "ffdhe3072", 00:14:12.544 "ffdhe4096", 00:14:12.544 "ffdhe6144", 00:14:12.544 "ffdhe8192" 00:14:12.544 ] 00:14:12.544 } 00:14:12.544 }, 00:14:12.544 { 00:14:12.544 "method": "bdev_nvme_attach_controller", 00:14:12.544 "params": { 00:14:12.544 "name": "nvme0", 00:14:12.544 "trtype": "TCP", 00:14:12.544 "adrfam": "IPv4", 00:14:12.545 "traddr": "10.0.0.3", 00:14:12.545 "trsvcid": "4420", 00:14:12.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.545 "prchk_reftag": false, 00:14:12.545 
"prchk_guard": false, 00:14:12.545 "ctrlr_loss_timeout_sec": 0, 00:14:12.545 "reconnect_delay_sec": 0, 00:14:12.545 "fast_io_fail_timeout_sec": 0, 00:14:12.545 "psk": "key0", 00:14:12.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.545 "hdgst": false, 00:14:12.545 "ddgst": false, 00:14:12.545 "multipath": "multipath" 00:14:12.545 } 00:14:12.545 }, 00:14:12.545 { 00:14:12.545 "method": "bdev_nvme_set_hotplug", 00:14:12.545 "params": { 00:14:12.545 "period_us": 100000, 00:14:12.545 "enable": false 00:14:12.545 } 00:14:12.545 }, 00:14:12.545 { 00:14:12.545 "method": "bdev_enable_histogram", 00:14:12.545 "params": { 00:14:12.545 "name": "nvme0n1", 00:14:12.545 "enable": true 00:14:12.545 } 00:14:12.545 }, 00:14:12.545 { 00:14:12.545 "method": "bdev_wait_for_examine" 00:14:12.545 } 00:14:12.545 ] 00:14:12.545 }, 00:14:12.545 { 00:14:12.545 "subsystem": "nbd", 00:14:12.545 "config": [] 00:14:12.545 } 00:14:12.545 ] 00:14:12.545 }' 00:14:12.545 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72449 00:14:12.545 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72449 ']' 00:14:12.545 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72449 00:14:12.545 16:59:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72449 00:14:12.545 killing process with pid 72449 00:14:12.545 Received shutdown signal, test time was about 1.000000 seconds 00:14:12.545 00:14:12.545 Latency(us) 00:14:12.545 [2024-11-20T16:59:31.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.545 [2024-11-20T16:59:31.081Z] =================================================================================================================== 00:14:12.545 [2024-11-20T16:59:31.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72449' 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72449 00:14:12.545 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72449 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72430 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72430 ']' 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72430 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72430 00:14:12.805 killing process with pid 72430 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72430' 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72430 00:14:12.805 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72430 00:14:13.064 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:13.064 "subsystems": [ 00:14:13.064 { 00:14:13.064 "subsystem": "keyring", 00:14:13.064 "config": [ 00:14:13.064 { 00:14:13.064 "method": "keyring_file_add_key", 00:14:13.064 "params": { 00:14:13.064 "name": "key0", 00:14:13.064 "path": "/tmp/tmp.YwUiPy5WQg" 00:14:13.064 } 00:14:13.064 } 00:14:13.064 ] 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "subsystem": "iobuf", 00:14:13.064 "config": [ 00:14:13.064 { 00:14:13.064 "method": "iobuf_set_options", 00:14:13.064 "params": { 00:14:13.064 "small_pool_count": 8192, 00:14:13.064 "large_pool_count": 1024, 00:14:13.064 "small_bufsize": 8192, 00:14:13.064 "large_bufsize": 135168, 00:14:13.064 "enable_numa": false 00:14:13.064 } 00:14:13.064 } 00:14:13.064 ] 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "subsystem": "sock", 00:14:13.064 "config": [ 00:14:13.064 { 00:14:13.064 "method": "sock_set_default_impl", 00:14:13.064 "params": { 00:14:13.064 "impl_name": "uring" 00:14:13.064 } 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "method": "sock_impl_set_options", 00:14:13.064 "params": { 00:14:13.064 "impl_name": "ssl", 00:14:13.064 "recv_buf_size": 4096, 00:14:13.064 "send_buf_size": 4096, 00:14:13.064 "enable_recv_pipe": true, 00:14:13.064 "enable_quickack": false, 00:14:13.064 "enable_placement_id": 0, 00:14:13.064 "enable_zerocopy_send_server": true, 00:14:13.064 "enable_zerocopy_send_client": false, 00:14:13.064 "zerocopy_threshold": 0, 00:14:13.064 "tls_version": 0, 00:14:13.064 "enable_ktls": false 00:14:13.064 } 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "method": "sock_impl_set_options", 00:14:13.064 "params": { 00:14:13.064 "impl_name": "posix", 00:14:13.064 "recv_buf_size": 2097152, 00:14:13.064 "send_buf_size": 2097152, 00:14:13.064 "enable_recv_pipe": true, 00:14:13.064 "enable_quickack": false, 00:14:13.064 "enable_placement_id": 0, 00:14:13.064 "enable_zerocopy_send_server": true, 00:14:13.064 "enable_zerocopy_send_client": false, 00:14:13.064 "zerocopy_threshold": 0, 00:14:13.064 "tls_version": 0, 00:14:13.064 "enable_ktls": false 00:14:13.064 } 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "method": "sock_impl_set_options", 00:14:13.064 "params": { 00:14:13.064 "impl_name": "uring", 00:14:13.064 "recv_buf_size": 2097152, 00:14:13.064 "send_buf_size": 2097152, 00:14:13.064 "enable_recv_pipe": true, 00:14:13.064 "enable_quickack": false, 00:14:13.064 "enable_placement_id": 0, 00:14:13.064 "enable_zerocopy_send_server": false, 00:14:13.064 "enable_zerocopy_send_client": false, 00:14:13.064 "zerocopy_threshold": 0, 00:14:13.064 "tls_version": 0, 00:14:13.064 "enable_ktls": false 00:14:13.064 } 00:14:13.064 } 00:14:13.064 ] 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "subsystem": "vmd", 00:14:13.064 "config": [] 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "subsystem": "accel", 00:14:13.064 "config": [ 00:14:13.064 { 00:14:13.064 "method": "accel_set_options", 00:14:13.064 "params": { 00:14:13.064 "small_cache_size": 128, 00:14:13.064 "large_cache_size": 16, 00:14:13.064 "task_count": 
2048, 00:14:13.064 "sequence_count": 2048, 00:14:13.064 "buf_count": 2048 00:14:13.064 } 00:14:13.064 } 00:14:13.064 ] 00:14:13.064 }, 00:14:13.064 { 00:14:13.064 "subsystem": "bdev", 00:14:13.064 "config": [ 00:14:13.064 { 00:14:13.064 "method": "bdev_set_options", 00:14:13.064 "params": { 00:14:13.064 "bdev_io_pool_size": 65535, 00:14:13.064 "bdev_io_cache_size": 256, 00:14:13.065 "bdev_auto_examine": true, 00:14:13.065 "iobuf_small_cache_size": 128, 00:14:13.065 "iobuf_large_cache_size": 16 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_raid_set_options", 00:14:13.065 "params": { 00:14:13.065 "process_window_size_kb": 1024, 00:14:13.065 "process_max_bandwidth_mb_sec": 0 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_iscsi_set_options", 00:14:13.065 "params": { 00:14:13.065 "timeout_sec": 30 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_nvme_set_options", 00:14:13.065 "params": { 00:14:13.065 "action_on_timeout": "none", 00:14:13.065 "timeout_us": 0, 00:14:13.065 "timeout_admin_us": 0, 00:14:13.065 "keep_alive_timeout_ms": 10000, 00:14:13.065 "arbitration_burst": 0, 00:14:13.065 "low_priority_weight": 0, 00:14:13.065 "medium_priority_weight": 0, 00:14:13.065 "high_priority_weight": 0, 00:14:13.065 "nvme_adminq_poll_period_us": 10000, 00:14:13.065 "nvme_ioq_poll_period_us": 0, 00:14:13.065 "io_queue_requests": 0, 00:14:13.065 "delay_cmd_submit": true, 00:14:13.065 "transport_retry_count": 4, 00:14:13.065 "bdev_retry_count": 3, 00:14:13.065 "transport_ack_timeout": 0, 00:14:13.065 "ctrlr_loss_timeout_sec": 0, 00:14:13.065 "reconnect_delay_sec": 0, 00:14:13.065 "fast_io_fail_timeout_sec": 0, 00:14:13.065 "disable_auto_failback": false, 00:14:13.065 "generate_uuids": false, 00:14:13.065 "transport_tos": 0, 00:14:13.065 "nvme_error_stat": false, 00:14:13.065 "rdma_srq_size": 0, 00:14:13.065 "io_path_stat": false, 00:14:13.065 "allow_accel_sequence": false, 00:14:13.065 "rdma_max_cq_size": 0, 00:14:13.065 "rdma_cm_event_timeout_ms": 0, 00:14:13.065 "dhchap_digests": [ 00:14:13.065 "sha256", 00:14:13.065 "sha384", 00:14:13.065 "sha512" 00:14:13.065 ], 00:14:13.065 "dhchap_dhgroups": [ 00:14:13.065 "null", 00:14:13.065 "ffdhe2048", 00:14:13.065 "ffdhe3072", 00:14:13.065 "ffdhe4096", 00:14:13.065 "ffdhe6144", 00:14:13.065 "ffdhe8192" 00:14:13.065 ] 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_nvme_set_hotplug", 00:14:13.065 "params": { 00:14:13.065 "period_us": 100000, 00:14:13.065 "enable": false 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_malloc_create", 00:14:13.065 "params": { 00:14:13.065 "name": "malloc0", 00:14:13.065 "num_blocks": 8192, 00:14:13.065 "block_size": 4096, 00:14:13.065 "physical_block_size": 4096, 00:14:13.065 "uuid": "d5c28480-012e-488a-bd7b-1ba4c2345e15", 00:14:13.065 "optimal_io_boundary": 0, 00:14:13.065 "md_size": 0, 00:14:13.065 "dif_type": 0, 00:14:13.065 "dif_is_head_of_md": false, 00:14:13.065 "dif_pi_format": 0 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "bdev_wait_for_examine" 00:14:13.065 } 00:14:13.065 ] 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "subsystem": "nbd", 00:14:13.065 "config": [] 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "subsystem": "scheduler", 00:14:13.065 "config": [ 00:14:13.065 { 00:14:13.065 "method": "framework_set_scheduler", 00:14:13.065 "params": { 00:14:13.065 "name": "static" 00:14:13.065 } 00:14:13.065 } 00:14:13.065 ] 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 
"subsystem": "nvmf", 00:14:13.065 "config": [ 00:14:13.065 { 00:14:13.065 "method": "nvmf_set_config", 00:14:13.065 "params": { 00:14:13.065 "discovery_filter": "match_any", 00:14:13.065 "admin_cmd_passthru": { 00:14:13.065 "identify_ctrlr": false 00:14:13.065 }, 00:14:13.065 "dhchap_digests": [ 00:14:13.065 "sha256", 00:14:13.065 "sha384", 00:14:13.065 "sha512" 00:14:13.065 ], 00:14:13.065 "dhchap_dhgroups": [ 00:14:13.065 "null", 00:14:13.065 "ffdhe2048", 00:14:13.065 "ffdhe3072", 00:14:13.065 "ffdhe4096", 00:14:13.065 "ffdhe6144", 00:14:13.065 "ffdhe8192" 00:14:13.065 ] 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_set_max_subsystems", 00:14:13.065 "params": { 00:14:13.065 "max_subsystems": 1024 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_set_crdt", 00:14:13.065 "params": { 00:14:13.065 "crdt1": 0, 00:14:13.065 "crdt2": 0, 00:14:13.065 "crdt3": 0 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_create_transport", 00:14:13.065 "params": { 00:14:13.065 "trtype": "TCP", 00:14:13.065 "max_queue_depth": 128, 00:14:13.065 "max_io_qpairs_per_ctrlr": 127, 00:14:13.065 "in_capsule_data_size": 4096, 00:14:13.065 "max_io_size": 131072, 00:14:13.065 "io_unit_size": 131072, 00:14:13.065 "max_aq_depth": 128, 00:14:13.065 "num_shared_buffers": 511, 00:14:13.065 "buf_cache_size": 4294967295, 00:14:13.065 "dif_insert_or_strip": false, 00:14:13.065 "zcopy": false, 00:14:13.065 "c2h_success": false, 00:14:13.065 "sock_priority": 0, 00:14:13.065 "abort_timeout_sec": 1, 00:14:13.065 "ack_timeout": 0, 00:14:13.065 "data_wr_pool_size": 0 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_create_subsystem", 00:14:13.065 "params": { 00:14:13.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.065 "allow_any_host": false, 00:14:13.065 "serial_number": "00000000000000000000", 00:14:13.065 "model_number": "SPDK bdev Controller", 00:14:13.065 "max_namespaces": 32, 00:14:13.065 "min_cntlid": 1, 00:14:13.065 "max_cntlid": 65519, 00:14:13.065 "ana_reporting": false 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_subsystem_add_host", 00:14:13.065 "params": { 00:14:13.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.065 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.065 "psk": "key0" 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_subsystem_add_ns", 00:14:13.065 "params": { 00:14:13.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.065 "namespace": { 00:14:13.065 "nsid": 1, 00:14:13.065 "bdev_name": "malloc0", 00:14:13.065 "nguid": "D5C28480012E488ABD7B1BA4C2345E15", 00:14:13.065 "uuid": "d5c28480-012e-488a-bd7b-1ba4c2345e15", 00:14:13.065 "no_auto_visible": false 00:14:13.065 } 00:14:13.065 } 00:14:13.065 }, 00:14:13.065 { 00:14:13.065 "method": "nvmf_subsystem_add_listener", 00:14:13.065 "params": { 00:14:13.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.065 "listen_address": { 00:14:13.065 "trtype": "TCP", 00:14:13.065 "adrfam": "IPv4", 00:14:13.065 "traddr": "10.0.0.3", 00:14:13.065 "trsvcid": "4420" 00:14:13.065 }, 00:14:13.065 "secure_channel": false, 00:14:13.065 "sock_impl": "ssl" 00:14:13.065 } 00:14:13.065 } 00:14:13.065 ] 00:14:13.065 } 00:14:13.065 ] 00:14:13.065 }' 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72514 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72514 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72514 ']' 00:14:13.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.065 16:59:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.065 [2024-11-20 16:59:31.518814] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:13.065 [2024-11-20 16:59:31.518904] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.409 [2024-11-20 16:59:31.661457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.409 [2024-11-20 16:59:31.717256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.409 [2024-11-20 16:59:31.717316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.409 [2024-11-20 16:59:31.717344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.409 [2024-11-20 16:59:31.717352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.409 [2024-11-20 16:59:31.717358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:13.409 [2024-11-20 16:59:31.717802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.409 [2024-11-20 16:59:31.885397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.712 [2024-11-20 16:59:31.964021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.712 [2024-11-20 16:59:31.995976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.712 [2024-11-20 16:59:31.996168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72542 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72542 /var/tmp/bdevperf.sock 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72542 ']' 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:14.282 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:14.282 "subsystems": [ 00:14:14.282 { 00:14:14.282 "subsystem": "keyring", 00:14:14.282 "config": [ 00:14:14.282 { 00:14:14.282 "method": "keyring_file_add_key", 00:14:14.282 "params": { 00:14:14.282 "name": "key0", 00:14:14.282 "path": "/tmp/tmp.YwUiPy5WQg" 00:14:14.282 } 00:14:14.282 } 00:14:14.282 ] 00:14:14.282 }, 00:14:14.282 { 00:14:14.282 "subsystem": "iobuf", 00:14:14.282 "config": [ 00:14:14.282 { 00:14:14.282 "method": "iobuf_set_options", 00:14:14.282 "params": { 00:14:14.282 "small_pool_count": 8192, 00:14:14.282 "large_pool_count": 1024, 00:14:14.282 "small_bufsize": 8192, 00:14:14.282 "large_bufsize": 135168, 00:14:14.282 "enable_numa": false 00:14:14.282 } 00:14:14.282 } 00:14:14.282 ] 00:14:14.282 }, 00:14:14.282 { 00:14:14.282 "subsystem": "sock", 00:14:14.282 "config": [ 00:14:14.282 { 00:14:14.282 "method": "sock_set_default_impl", 00:14:14.282 "params": { 00:14:14.282 "impl_name": "uring" 00:14:14.282 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "sock_impl_set_options", 00:14:14.283 "params": { 00:14:14.283 "impl_name": "ssl", 00:14:14.283 "recv_buf_size": 4096, 00:14:14.283 "send_buf_size": 4096, 00:14:14.283 "enable_recv_pipe": true, 00:14:14.283 "enable_quickack": false, 00:14:14.283 "enable_placement_id": 0, 00:14:14.283 "enable_zerocopy_send_server": true, 00:14:14.283 "enable_zerocopy_send_client": false, 00:14:14.283 "zerocopy_threshold": 0, 00:14:14.283 "tls_version": 0, 00:14:14.283 "enable_ktls": false 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "sock_impl_set_options", 00:14:14.283 "params": { 00:14:14.283 
"impl_name": "posix", 00:14:14.283 "recv_buf_size": 2097152, 00:14:14.283 "send_buf_size": 2097152, 00:14:14.283 "enable_recv_pipe": true, 00:14:14.283 "enable_quickack": false, 00:14:14.283 "enable_placement_id": 0, 00:14:14.283 "enable_zerocopy_send_server": true, 00:14:14.283 "enable_zerocopy_send_client": false, 00:14:14.283 "zerocopy_threshold": 0, 00:14:14.283 "tls_version": 0, 00:14:14.283 "enable_ktls": false 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "sock_impl_set_options", 00:14:14.283 "params": { 00:14:14.283 "impl_name": "uring", 00:14:14.283 "recv_buf_size": 2097152, 00:14:14.283 "send_buf_size": 2097152, 00:14:14.283 "enable_recv_pipe": true, 00:14:14.283 "enable_quickack": false, 00:14:14.283 "enable_placement_id": 0, 00:14:14.283 "enable_zerocopy_send_server": false, 00:14:14.283 "enable_zerocopy_send_client": false, 00:14:14.283 "zerocopy_threshold": 0, 00:14:14.283 "tls_version": 0, 00:14:14.283 "enable_ktls": false 00:14:14.283 } 00:14:14.283 } 00:14:14.283 ] 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "subsystem": "vmd", 00:14:14.283 "config": [] 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "subsystem": "accel", 00:14:14.283 "config": [ 00:14:14.283 { 00:14:14.283 "method": "accel_set_options", 00:14:14.283 "params": { 00:14:14.283 "small_cache_size": 128, 00:14:14.283 "large_cache_size": 16, 00:14:14.283 "task_count": 2048, 00:14:14.283 "sequence_count": 2048, 00:14:14.283 "buf_count": 2048 00:14:14.283 } 00:14:14.283 } 00:14:14.283 ] 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "subsystem": "bdev", 00:14:14.283 "config": [ 00:14:14.283 { 00:14:14.283 "method": "bdev_set_options", 00:14:14.283 "params": { 00:14:14.283 "bdev_io_pool_size": 65535, 00:14:14.283 "bdev_io_cache_size": 256, 00:14:14.283 "bdev_auto_examine": true, 00:14:14.283 "iobuf_small_cache_size": 128, 00:14:14.283 "iobuf_large_cache_size": 16 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_raid_set_options", 00:14:14.283 "params": { 00:14:14.283 "process_window_size_kb": 1024, 00:14:14.283 "process_max_bandwidth_mb_sec": 0 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_iscsi_set_options", 00:14:14.283 "params": { 00:14:14.283 "timeout_sec": 30 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_nvme_set_options", 00:14:14.283 "params": { 00:14:14.283 "action_on_timeout": "none", 00:14:14.283 "timeout_us": 0, 00:14:14.283 "timeout_admin_us": 0, 00:14:14.283 "keep_alive_timeout_ms": 10000, 00:14:14.283 "arbitration_burst": 0, 00:14:14.283 "low_priority_weight": 0, 00:14:14.283 "medium_priority_weight": 0, 00:14:14.283 "high_priority_weight": 0, 00:14:14.283 "nvme_adminq_poll_period_us": 10000, 00:14:14.283 "nvme_ioq_poll_period_us": 0, 00:14:14.283 "io_queue_requests": 512, 00:14:14.283 "delay_cmd_submit": true, 00:14:14.283 "transport_retry_count": 4, 00:14:14.283 "bdev_retry_count": 3, 00:14:14.283 "transport_ack_timeout": 0, 00:14:14.283 "ctrlr_loss_timeout_sec": 0, 00:14:14.283 "reconnect_delay_sec": 0, 00:14:14.283 "fast_io_fail_timeout_sec": 0, 00:14:14.283 "disable_auto_failback": false, 00:14:14.283 "generate_uuids": false, 00:14:14.283 "transport_tos": 0, 00:14:14.283 "nvme_error_stat": false, 00:14:14.283 "rdma_srq_size": 0, 00:14:14.283 "io_path_stat": false, 00:14:14.283 "allow_accel_sequence": false, 00:14:14.283 "rdma_max_cq_size": 0, 00:14:14.283 "rdma_cm_event_timeout_ms": 0, 00:14:14.283 "dhchap_digests": [ 00:14:14.283 "sha256", 00:14:14.283 "sha384", 00:14:14.283 "sha512" 00:14:14.283 
], 00:14:14.283 "dhchap_dhgroups": [ 00:14:14.283 "null", 00:14:14.283 "ffdhe2048", 00:14:14.283 "ffdhe3072", 00:14:14.283 "ffdhe4096", 00:14:14.283 "ffdhe6144", 00:14:14.283 "ffdhe8192" 00:14:14.283 ] 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_nvme_attach_controller", 00:14:14.283 "params": { 00:14:14.283 "name": "nvme0", 00:14:14.283 "trtype": "TCP", 00:14:14.283 "adrfam": "IPv4", 00:14:14.283 "traddr": "10.0.0.3", 00:14:14.283 "trsvcid": "4420", 00:14:14.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.283 "prchk_reftag": false, 00:14:14.283 "prchk_guard": false, 00:14:14.283 "ctrlr_loss_timeout_sec": 0, 00:14:14.283 "reconnect_delay_sec": 0, 00:14:14.283 "fast_io_fail_timeout_sec": 0, 00:14:14.283 "psk": "key0", 00:14:14.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.283 "hdgst": false, 00:14:14.283 "ddgst": false, 00:14:14.283 "multipath": "multipath" 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_nvme_set_hotplug", 00:14:14.283 "params": { 00:14:14.283 "period_us": 100000, 00:14:14.283 "enable": false 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_enable_histogram", 00:14:14.283 "params": { 00:14:14.283 "name": "nvme0n1", 00:14:14.283 "enable": true 00:14:14.283 } 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "method": "bdev_wait_for_examine" 00:14:14.283 } 00:14:14.283 ] 00:14:14.283 }, 00:14:14.283 { 00:14:14.283 "subsystem": "nbd", 00:14:14.283 "config": [] 00:14:14.283 } 00:14:14.283 ] 00:14:14.283 }' 00:14:14.283 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.283 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.283 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.284 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.284 16:59:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.284 [2024-11-20 16:59:32.639266] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:14:14.284 [2024-11-20 16:59:32.639597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72542 ] 00:14:14.284 [2024-11-20 16:59:32.795649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.542 [2024-11-20 16:59:32.855819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.542 [2024-11-20 16:59:32.998180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.542 [2024-11-20 16:59:33.050986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.480 16:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.739 Running I/O for 1 seconds... 00:14:16.677 3927.00 IOPS, 15.34 MiB/s 00:14:16.677 Latency(us) 00:14:16.677 [2024-11-20T16:59:35.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.677 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.677 Verification LBA range: start 0x0 length 0x2000 00:14:16.677 nvme0n1 : 1.02 3990.72 15.59 0.00 0.00 31810.78 5064.15 27167.65 00:14:16.677 [2024-11-20T16:59:35.213Z] =================================================================================================================== 00:14:16.677 [2024-11-20T16:59:35.213Z] Total : 3990.72 15.59 0.00 0.00 31810.78 5064.15 27167.65 00:14:16.677 { 00:14:16.677 "results": [ 00:14:16.677 { 00:14:16.677 "job": "nvme0n1", 00:14:16.677 "core_mask": "0x2", 00:14:16.677 "workload": "verify", 00:14:16.677 "status": "finished", 00:14:16.677 "verify_range": { 00:14:16.677 "start": 0, 00:14:16.677 "length": 8192 00:14:16.677 }, 00:14:16.677 "queue_depth": 128, 00:14:16.677 "io_size": 4096, 00:14:16.677 "runtime": 1.016357, 00:14:16.677 "iops": 3990.723731917033, 00:14:16.677 "mibps": 15.58876457780091, 00:14:16.677 "io_failed": 0, 00:14:16.677 "io_timeout": 0, 00:14:16.677 "avg_latency_us": 31810.77941545634, 00:14:16.677 "min_latency_us": 5064.145454545454, 00:14:16.677 "max_latency_us": 27167.65090909091 00:14:16.677 } 00:14:16.677 ], 00:14:16.677 "core_count": 1 00:14:16.677 } 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:16.677 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:16.677 nvmf_trace.0 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72542 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72542 ']' 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72542 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.937 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72542 00:14:16.938 killing process with pid 72542 00:14:16.938 Received shutdown signal, test time was about 1.000000 seconds 00:14:16.938 00:14:16.938 Latency(us) 00:14:16.938 [2024-11-20T16:59:35.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.938 [2024-11-20T16:59:35.474Z] =================================================================================================================== 00:14:16.938 [2024-11-20T16:59:35.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.938 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:16.938 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:16.938 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72542' 00:14:16.938 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72542 00:14:16.938 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72542 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.197 rmmod nvme_tcp 00:14:17.197 rmmod nvme_fabrics 00:14:17.197 rmmod nvme_keyring 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72514 ']' 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72514 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72514 ']' 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72514 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72514 00:14:17.197 killing process with pid 72514 00:14:17.197 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.198 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.198 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72514' 00:14:17.198 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72514 00:14:17.198 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72514 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:17.457 16:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:17.457 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:17.716 16:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3XBbl4goVw /tmp/tmp.Xg0ZBxVdI7 /tmp/tmp.YwUiPy5WQg 00:14:17.716 ************************************ 00:14:17.716 END TEST nvmf_tls 00:14:17.716 ************************************ 00:14:17.716 00:14:17.716 real 1m27.934s 00:14:17.716 user 2m24.027s 00:14:17.716 sys 0m27.278s 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.716 ************************************ 00:14:17.716 START TEST nvmf_fips 00:14:17.716 ************************************ 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.716 * Looking for test storage... 
00:14:17.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:14:17.716 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.975 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:17.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.976 --rc genhtml_branch_coverage=1 00:14:17.976 --rc genhtml_function_coverage=1 00:14:17.976 --rc genhtml_legend=1 00:14:17.976 --rc geninfo_all_blocks=1 00:14:17.976 --rc geninfo_unexecuted_blocks=1 00:14:17.976 00:14:17.976 ' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:17.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.976 --rc genhtml_branch_coverage=1 00:14:17.976 --rc genhtml_function_coverage=1 00:14:17.976 --rc genhtml_legend=1 00:14:17.976 --rc geninfo_all_blocks=1 00:14:17.976 --rc geninfo_unexecuted_blocks=1 00:14:17.976 00:14:17.976 ' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:17.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.976 --rc genhtml_branch_coverage=1 00:14:17.976 --rc genhtml_function_coverage=1 00:14:17.976 --rc genhtml_legend=1 00:14:17.976 --rc geninfo_all_blocks=1 00:14:17.976 --rc geninfo_unexecuted_blocks=1 00:14:17.976 00:14:17.976 ' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:17.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.976 --rc genhtml_branch_coverage=1 00:14:17.976 --rc genhtml_function_coverage=1 00:14:17.976 --rc genhtml_legend=1 00:14:17.976 --rc geninfo_all_blocks=1 00:14:17.976 --rc geninfo_unexecuted_blocks=1 00:14:17.976 00:14:17.976 ' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.976 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:17.977 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:18.237 Error setting digest 00:14:18.237 401239FC127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:18.237 401239FC127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:18.237 
16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.237 Cannot find device "nvmf_init_br" 00:14:18.237 16:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.237 Cannot find device "nvmf_init_br2" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.237 Cannot find device "nvmf_tgt_br" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.237 Cannot find device "nvmf_tgt_br2" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.237 Cannot find device "nvmf_init_br" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.237 Cannot find device "nvmf_init_br2" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.237 Cannot find device "nvmf_tgt_br" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.237 Cannot find device "nvmf_tgt_br2" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.237 Cannot find device "nvmf_br" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.237 Cannot find device "nvmf_init_if" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.237 Cannot find device "nvmf_init_if2" 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:18.237 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.238 16:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.238 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:18.497 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:18.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:18.498 00:14:18.498 --- 10.0.0.3 ping statistics --- 00:14:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.498 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:18.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:18.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:14:18.498 00:14:18.498 --- 10.0.0.4 ping statistics --- 00:14:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.498 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:18.498 00:14:18.498 --- 10.0.0.1 ping statistics --- 00:14:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.498 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:18.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:18.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:18.498 00:14:18.498 --- 10.0.0.2 ping statistics --- 00:14:18.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.498 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72857 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72857 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72857 ']' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.498 16:59:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.757 [2024-11-20 16:59:37.073666] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
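The nvmf_veth_init sequence traced above is dense; reduced to a single initiator/target pair it amounts to roughly the sketch below. Interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.x addresses are simply the defaults this run used; the real helper additionally creates a second initiator/target pair and installs matching iptables ACCEPT rules, which are omitted here.
    # simplified sketch of the topology nvmf_veth_init builds (one pair only)
    ip netns add nvmf_tgt_ns_spdk                          # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk         # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator-side address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                        # bridge joins the host-side veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3                                     # host -> target sanity check, as in the trace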
00:14:18.757 [2024-11-20 16:59:37.073971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.757 [2024-11-20 16:59:37.221883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.757 [2024-11-20 16:59:37.279242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.757 [2024-11-20 16:59:37.279304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.757 [2024-11-20 16:59:37.279319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.757 [2024-11-20 16:59:37.279329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.757 [2024-11-20 16:59:37.279338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.757 [2024-11-20 16:59:37.279781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.016 [2024-11-20 16:59:37.336150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.W2v 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.W2v 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.W2v 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.W2v 00:14:19.016 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.275 [2024-11-20 16:59:37.760559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.275 [2024-11-20 16:59:37.776486] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.275 [2024-11-20 16:59:37.776712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.534 malloc0 00:14:19.534 16:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72891 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72891 /var/tmp/bdevperf.sock 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72891 ']' 00:14:19.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.534 16:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 [2024-11-20 16:59:37.930008] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:19.534 [2024-11-20 16:59:37.930139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:14:19.793 [2024-11-20 16:59:38.080376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.793 [2024-11-20 16:59:38.142696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.793 [2024-11-20 16:59:38.199569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.732 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.732 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:20.732 16:59:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.W2v 00:14:20.732 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.997 [2024-11-20 16:59:39.407642] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.997 TLSTESTn1 00:14:20.997 16:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.261 Running I/O for 10 seconds... 
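The PSK registration and TLS attach sequence from fips.sh traced just above reduces to the following sketch. The RPC names (keyring_file_add_key, bdev_nvme_attach_controller with --psk) and the bdevperf socket path are taken from the trace; the key material and temp-file path shown here are placeholders, not the values used in this run.
    # sketch of the TLS PSK + attach flow exercised by fips.sh
    KEY_PATH=$(mktemp -t spdk-psk.XXX)                     # placeholder; this run produced /tmp/spdk-psk.W2v
    echo -n 'NVMeTLSkey-1:01:<interleaved PSK>:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                                 # the test restricts permissions before registering the key
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drives the 10 s verify workload below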
00:14:23.134 3404.00 IOPS, 13.30 MiB/s [2024-11-20T16:59:42.608Z] 3283.50 IOPS, 12.83 MiB/s [2024-11-20T16:59:43.987Z] 3411.67 IOPS, 13.33 MiB/s [2024-11-20T16:59:44.924Z] 3514.75 IOPS, 13.73 MiB/s [2024-11-20T16:59:45.861Z] 3582.20 IOPS, 13.99 MiB/s [2024-11-20T16:59:46.798Z] 3691.17 IOPS, 14.42 MiB/s [2024-11-20T16:59:47.735Z] 3795.43 IOPS, 14.83 MiB/s [2024-11-20T16:59:48.713Z] 3817.75 IOPS, 14.91 MiB/s [2024-11-20T16:59:49.650Z] 3830.44 IOPS, 14.96 MiB/s [2024-11-20T16:59:49.650Z] 3839.50 IOPS, 15.00 MiB/s 00:14:31.114 Latency(us) 00:14:31.114 [2024-11-20T16:59:49.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.114 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:31.114 Verification LBA range: start 0x0 length 0x2000 00:14:31.114 TLSTESTn1 : 10.02 3844.89 15.02 0.00 0.00 33229.63 6762.12 37176.79 00:14:31.114 [2024-11-20T16:59:49.651Z] =================================================================================================================== 00:14:31.115 [2024-11-20T16:59:49.651Z] Total : 3844.89 15.02 0.00 0.00 33229.63 6762.12 37176.79 00:14:31.115 { 00:14:31.115 "results": [ 00:14:31.115 { 00:14:31.115 "job": "TLSTESTn1", 00:14:31.115 "core_mask": "0x4", 00:14:31.115 "workload": "verify", 00:14:31.115 "status": "finished", 00:14:31.115 "verify_range": { 00:14:31.115 "start": 0, 00:14:31.115 "length": 8192 00:14:31.115 }, 00:14:31.115 "queue_depth": 128, 00:14:31.115 "io_size": 4096, 00:14:31.115 "runtime": 10.018497, 00:14:31.115 "iops": 3844.888110462078, 00:14:31.115 "mibps": 15.019094181492493, 00:14:31.115 "io_failed": 0, 00:14:31.115 "io_timeout": 0, 00:14:31.115 "avg_latency_us": 33229.62612140092, 00:14:31.115 "min_latency_us": 6762.123636363636, 00:14:31.115 "max_latency_us": 37176.785454545454 00:14:31.115 } 00:14:31.115 ], 00:14:31.115 "core_count": 1 00:14:31.115 } 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:31.115 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:31.374 nvmf_trace.0 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72891 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72891 ']' 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72891 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72891 00:14:31.374 killing process with pid 72891 00:14:31.374 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.374 00:14:31.374 Latency(us) 00:14:31.374 [2024-11-20T16:59:49.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.374 [2024-11-20T16:59:49.910Z] =================================================================================================================== 00:14:31.374 [2024-11-20T16:59:49.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72891' 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72891 00:14:31.374 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72891 00:14:31.633 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:31.633 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.633 16:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.633 rmmod nvme_tcp 00:14:31.633 rmmod nvme_fabrics 00:14:31.633 rmmod nvme_keyring 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72857 ']' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72857 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72857 ']' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72857 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72857 00:14:31.633 killing process with pid 72857 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72857' 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72857 00:14:31.633 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72857 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:31.892 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:32.150 16:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.W2v 00:14:32.150 ************************************ 00:14:32.150 END TEST nvmf_fips 00:14:32.150 ************************************ 00:14:32.150 00:14:32.150 real 0m14.444s 00:14:32.150 user 0m20.192s 00:14:32.150 sys 0m5.732s 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.150 ************************************ 00:14:32.150 START TEST nvmf_control_msg_list 00:14:32.150 ************************************ 00:14:32.150 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:32.409 * Looking for test storage... 00:14:32.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.409 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:32.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.409 --rc genhtml_branch_coverage=1 00:14:32.409 --rc genhtml_function_coverage=1 00:14:32.409 --rc genhtml_legend=1 00:14:32.409 --rc geninfo_all_blocks=1 00:14:32.409 --rc geninfo_unexecuted_blocks=1 00:14:32.409 00:14:32.410 ' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.410 --rc genhtml_branch_coverage=1 00:14:32.410 --rc genhtml_function_coverage=1 00:14:32.410 --rc genhtml_legend=1 00:14:32.410 --rc geninfo_all_blocks=1 00:14:32.410 --rc geninfo_unexecuted_blocks=1 00:14:32.410 00:14:32.410 ' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.410 --rc genhtml_branch_coverage=1 00:14:32.410 --rc genhtml_function_coverage=1 00:14:32.410 --rc genhtml_legend=1 00:14:32.410 --rc geninfo_all_blocks=1 00:14:32.410 --rc geninfo_unexecuted_blocks=1 00:14:32.410 00:14:32.410 ' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.410 --rc genhtml_branch_coverage=1 00:14:32.410 --rc genhtml_function_coverage=1 00:14:32.410 --rc genhtml_legend=1 00:14:32.410 --rc geninfo_all_blocks=1 00:14:32.410 --rc 
geninfo_unexecuted_blocks=1 00:14:32.410 00:14:32.410 ' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:32.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:32.410 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:32.411 Cannot find device "nvmf_init_br" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:32.411 Cannot find device "nvmf_init_br2" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:32.411 Cannot find device "nvmf_tgt_br" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.411 Cannot find device "nvmf_tgt_br2" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:32.411 Cannot find device "nvmf_init_br" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:32.411 Cannot find device "nvmf_init_br2" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:32.411 Cannot find device "nvmf_tgt_br" 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:32.411 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:32.411 Cannot find device "nvmf_tgt_br2" 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:32.671 Cannot find device "nvmf_br" 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:32.671 Cannot find 
device "nvmf_init_if" 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:32.671 Cannot find device "nvmf_init_if2" 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.671 16:59:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:32.671 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:32.672 16:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:32.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:32.672 00:14:32.672 --- 10.0.0.3 ping statistics --- 00:14:32.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.672 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:32.672 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:32.672 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:32.672 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:32.672 00:14:32.672 --- 10.0.0.4 ping statistics --- 00:14:32.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.672 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:32.931 00:14:32.931 --- 10.0.0.1 ping statistics --- 00:14:32.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.931 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:32.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:14:32.931 00:14:32.931 --- 10.0.0.2 ping statistics --- 00:14:32.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.931 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73291 00:14:32.931 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73291 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73291 ']' 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
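[annotation] The "Cannot find device" / "Cannot open network namespace" messages earlier in this test come from the pre-cleanup pass deleting interfaces that do not exist yet; each failure is tolerated (the trace records "# true" right after it) before nvmf_veth_init builds the topology fresh. Condensed, the bring-up that the trace then walks through amounts to roughly the following; only the first initiator/target pair is shown, and the *_if2/*_br2 pair (10.0.0.2 / 10.0.0.4) plus the FORWARD rule on nvmf_br are created the same way:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # allow NVMe/TCP (port 4420) in, tagged so teardown can strip it later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator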
00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.932 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.932 [2024-11-20 16:59:51.298644] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:32.932 [2024-11-20 16:59:51.298716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.932 [2024-11-20 16:59:51.447203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.191 [2024-11-20 16:59:51.518435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.191 [2024-11-20 16:59:51.518503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.191 [2024-11-20 16:59:51.518516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.191 [2024-11-20 16:59:51.518525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.191 [2024-11-20 16:59:51.518533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.191 [2024-11-20 16:59:51.519017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.191 [2024-11-20 16:59:51.586842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.191 [2024-11-20 16:59:51.702529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.191 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.451 Malloc0 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:33.451 [2024-11-20 16:59:51.741960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73315 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73316 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73317 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:33.451 16:59:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73315 00:14:33.451 [2024-11-20 16:59:51.940751] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.451 [2024-11-20 16:59:51.941022] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.451 [2024-11-20 16:59:51.941201] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:34.831 Initializing NVMe Controllers 00:14:34.831 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.831 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:34.831 Initialization complete. Launching workers. 00:14:34.831 ======================================================== 00:14:34.831 Latency(us) 00:14:34.831 Device Information : IOPS MiB/s Average min max 00:14:34.831 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3652.00 14.27 273.48 221.86 812.27 00:14:34.831 ======================================================== 00:14:34.831 Total : 3652.00 14.27 273.48 221.86 812.27 00:14:34.831 00:14:34.831 Initializing NVMe Controllers 00:14:34.831 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.831 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:34.831 Initialization complete. Launching workers. 00:14:34.831 ======================================================== 00:14:34.831 Latency(us) 00:14:34.831 Device Information : IOPS MiB/s Average min max 00:14:34.831 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3652.99 14.27 273.28 197.14 464.57 00:14:34.831 ======================================================== 00:14:34.831 Total : 3652.99 14.27 273.28 197.14 464.57 00:14:34.831 00:14:34.831 Initializing NVMe Controllers 00:14:34.831 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.831 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:34.831 Initialization complete. Launching workers. 
00:14:34.831 ======================================================== 00:14:34.831 Latency(us) 00:14:34.831 Device Information : IOPS MiB/s Average min max 00:14:34.831 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3650.00 14.26 273.60 156.73 532.93 00:14:34.831 ======================================================== 00:14:34.831 Total : 3650.00 14.26 273.60 156.73 532.93 00:14:34.831 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73316 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73317 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.831 16:59:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.831 rmmod nvme_tcp 00:14:34.831 rmmod nvme_fabrics 00:14:34.831 rmmod nvme_keyring 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73291 ']' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73291 ']' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.831 killing process with pid 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73291' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73291 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.831 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:35.091 00:14:35.091 real 0m2.958s 00:14:35.091 user 0m4.865s 00:14:35.091 
sys 0m1.295s 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.091 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:35.091 ************************************ 00:14:35.091 END TEST nvmf_control_msg_list 00:14:35.091 ************************************ 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.352 ************************************ 00:14:35.352 START TEST nvmf_wait_for_buf 00:14:35.352 ************************************ 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:35.352 * Looking for test storage... 00:14:35.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.352 --rc genhtml_branch_coverage=1 00:14:35.352 --rc genhtml_function_coverage=1 00:14:35.352 --rc genhtml_legend=1 00:14:35.352 --rc geninfo_all_blocks=1 00:14:35.352 --rc geninfo_unexecuted_blocks=1 00:14:35.352 00:14:35.352 ' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.352 --rc genhtml_branch_coverage=1 00:14:35.352 --rc genhtml_function_coverage=1 00:14:35.352 --rc genhtml_legend=1 00:14:35.352 --rc geninfo_all_blocks=1 00:14:35.352 --rc geninfo_unexecuted_blocks=1 00:14:35.352 00:14:35.352 ' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.352 --rc genhtml_branch_coverage=1 00:14:35.352 --rc genhtml_function_coverage=1 00:14:35.352 --rc genhtml_legend=1 00:14:35.352 --rc geninfo_all_blocks=1 00:14:35.352 --rc geninfo_unexecuted_blocks=1 00:14:35.352 00:14:35.352 ' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:35.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.352 --rc genhtml_branch_coverage=1 00:14:35.352 --rc genhtml_function_coverage=1 00:14:35.352 --rc genhtml_legend=1 00:14:35.352 --rc geninfo_all_blocks=1 00:14:35.352 --rc geninfo_unexecuted_blocks=1 00:14:35.352 00:14:35.352 ' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.352 16:59:53 
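[annotation] The "lcov --version" probe and the "lt 1.15 2" call traced above decide which coverage flags get exported (lcov >= 2 needs the --rc lcov_* spellings). A trimmed sketch of that dotted-version comparison, in the spirit of the cmp_versions helper in scripts/common.sh; non-numeric components and other edge cases are omitted:
  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
          ((a > b)) && { [[ $op == ">" ]]; return; }
          ((a < b)) && { [[ $op == "<" ]]; return; }
      done
      [[ $op == "==" || $op == "<=" || $op == ">=" ]]
  }
  lt 1.15 2 && echo "1.15 < 2"    # matches the trace: lt 1.15 2 succeeds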
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.352 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
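[annotation] The "[: : integer expression expected" warning from common.sh line 33 (seen in this test and in nvmf_control_msg_list above) is bash complaining that '[' ... -eq 1 ']' was handed an empty string; the run simply takes the false branch and continues, so it is noise rather than a failure. Illustrative only, with a made-up variable name since the value tested at line 33 is not visible in the trace, two quiet alternatives:
  flag=""                                      # empty, like the value tested at line 33
  [ "$flag" -eq 1 ] && echo hit                # reproduces: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] || echo "defaulted to 0, no warning"
  [[ $flag -eq 1 ]] || echo "[[ ]] arithmetic treats empty as 0, no warning"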
00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:35.353 Cannot find device "nvmf_init_br" 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:35.353 Cannot find device "nvmf_init_br2" 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:35.353 Cannot find device "nvmf_tgt_br" 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.353 Cannot find device "nvmf_tgt_br2" 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:35.353 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:35.613 Cannot find device "nvmf_init_br" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:35.613 Cannot find device "nvmf_init_br2" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:35.613 Cannot find device "nvmf_tgt_br" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:35.613 Cannot find device "nvmf_tgt_br2" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:35.613 Cannot find device "nvmf_br" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:35.613 Cannot find device "nvmf_init_if" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:35.613 Cannot find device "nvmf_init_if2" 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.613 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.613 16:59:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.613 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:14:35.873 00:14:35.873 --- 10.0.0.3 ping statistics --- 00:14:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.873 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.873 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:14:35.873 00:14:35.873 --- 10.0.0.4 ping statistics --- 00:14:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.873 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:35.873 00:14:35.873 --- 10.0.0.1 ping statistics --- 00:14:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.873 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:35.873 00:14:35.873 --- 10.0.0.2 ping statistics --- 00:14:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.873 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73548 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73548 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73548 ']' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.873 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.873 [2024-11-20 16:59:54.346033] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
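Just above, nvmfappstart launches the target inside the test namespace with --wait-for-rpc (pid 73548 in this run) and waitforlisten blocks until the app's RPC socket answers. A condensed sketch of that launch-and-wait pattern, assuming root privileges and the default /var/tmp/spdk.sock socket; the real waitforlisten helper also enforces a retry limit:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll until the target answers a trivial RPC on the UNIX-domain socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done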
00:14:35.873 [2024-11-20 16:59:54.346133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.133 [2024-11-20 16:59:54.496272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.133 [2024-11-20 16:59:54.555158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.133 [2024-11-20 16:59:54.555237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.133 [2024-11-20 16:59:54.555264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.133 [2024-11-20 16:59:54.555272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.133 [2024-11-20 16:59:54.555279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.133 [2024-11-20 16:59:54.555676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:36.133 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.133 16:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 [2024-11-20 16:59:54.704524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 Malloc0 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 [2024-11-20 16:59:54.774257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:36.392 [2024-11-20 16:59:54.802351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.392 16:59:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:36.651 [2024-11-20 16:59:55.006943] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:38.028 Initializing NVMe Controllers 00:14:38.028 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:38.028 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:38.028 Initialization complete. Launching workers. 00:14:38.029 ======================================================== 00:14:38.029 Latency(us) 00:14:38.029 Device Information : IOPS MiB/s Average min max 00:14:38.029 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 506.00 63.25 7935.41 5045.12 10023.57 00:14:38.029 ======================================================== 00:14:38.029 Total : 506.00 63.25 7935.41 5045.12 10023.57 00:14:38.029 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4794 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4794 -eq 0 ]] 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.029 rmmod nvme_tcp 00:14:38.029 rmmod nvme_fabrics 00:14:38.029 rmmod nvme_keyring 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73548 ']' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73548 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73548 ']' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73548 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73548 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.029 killing process with pid 73548 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73548' 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73548 00:14:38.029 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73548 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:38.287 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:38.546 00:14:38.546 real 0m3.322s 00:14:38.546 user 0m2.575s 00:14:38.546 sys 0m0.808s 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.546 16:59:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:38.546 ************************************ 00:14:38.546 END TEST nvmf_wait_for_buf 00:14:38.546 ************************************ 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.546 16:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.547 ************************************ 00:14:38.547 START TEST nvmf_nsid 00:14:38.547 ************************************ 00:14:38.547 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:38.807 * Looking for test storage... 
00:14:38.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.807 --rc genhtml_branch_coverage=1 00:14:38.807 --rc genhtml_function_coverage=1 00:14:38.807 --rc genhtml_legend=1 00:14:38.807 --rc geninfo_all_blocks=1 00:14:38.807 --rc geninfo_unexecuted_blocks=1 00:14:38.807 00:14:38.807 ' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.807 --rc genhtml_branch_coverage=1 00:14:38.807 --rc genhtml_function_coverage=1 00:14:38.807 --rc genhtml_legend=1 00:14:38.807 --rc geninfo_all_blocks=1 00:14:38.807 --rc geninfo_unexecuted_blocks=1 00:14:38.807 00:14:38.807 ' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.807 --rc genhtml_branch_coverage=1 00:14:38.807 --rc genhtml_function_coverage=1 00:14:38.807 --rc genhtml_legend=1 00:14:38.807 --rc geninfo_all_blocks=1 00:14:38.807 --rc geninfo_unexecuted_blocks=1 00:14:38.807 00:14:38.807 ' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.807 --rc genhtml_branch_coverage=1 00:14:38.807 --rc genhtml_function_coverage=1 00:14:38.807 --rc genhtml_legend=1 00:14:38.807 --rc geninfo_all_blocks=1 00:14:38.807 --rc geninfo_unexecuted_blocks=1 00:14:38.807 00:14:38.807 ' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
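The scripts/common.sh lines above are the lcov version gate for the coverage options: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both versions on '.' and compares them field by field. A simplified standalone sketch of that kind of dotted-version comparison (hypothetical helper name, not the script's exact implementation):

    version_lt() {          # return 0 (true) if $1 sorts before $2, comparing dotted fields numerically
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            if ((${a[i]:-0} < ${b[i]:-0})); then return 0; fi
            if ((${a[i]:-0} > ${b[i]:-0})); then return 1; fi
        done
        return 1            # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, use the 1.x flag set"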
00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.807 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:38.808 Cannot find device "nvmf_init_br" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:38.808 Cannot find device "nvmf_init_br2" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:38.808 Cannot find device "nvmf_tgt_br" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.808 Cannot find device "nvmf_tgt_br2" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:38.808 Cannot find device "nvmf_init_br" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:38.808 Cannot find device "nvmf_init_br2" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:38.808 Cannot find device "nvmf_tgt_br" 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:38.808 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:39.067 Cannot find device "nvmf_tgt_br2" 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:39.067 Cannot find device "nvmf_br" 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:39.067 Cannot find device "nvmf_init_if" 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:39.067 Cannot find device "nvmf_init_if2" 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:39.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
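The nvmf_veth_init trace above rebuilds the same topology used earlier for wait_for_buf: a dedicated target namespace, veth pairs for two initiator and two target interfaces, and an nvmf_br bridge joining the host-side peers. A condensed sketch covering one initiator/target pair, assuming root privileges and the addresses shown in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    # after this, 10.0.0.1 (host) can reach 10.0.0.3 (namespace) through nvmf_br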
00:14:39.067 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.068 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:39.068 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:39.068 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:39.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:39.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:14:39.326 00:14:39.326 --- 10.0.0.3 ping statistics --- 00:14:39.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.326 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:39.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:39.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:39.326 00:14:39.326 --- 10.0.0.4 ping statistics --- 00:14:39.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.326 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:39.326 00:14:39.326 --- 10.0.0.1 ping statistics --- 00:14:39.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.326 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:39.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:39.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:39.326 00:14:39.326 --- 10.0.0.2 ping statistics --- 00:14:39.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.326 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73808 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73808 00:14:39.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73808 ']' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.326 16:59:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 [2024-11-20 16:59:57.709203] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:14:39.326 [2024-11-20 16:59:57.709460] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.585 [2024-11-20 16:59:57.864021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.585 [2024-11-20 16:59:57.927316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.585 [2024-11-20 16:59:57.927375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.585 [2024-11-20 16:59:57.927390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.585 [2024-11-20 16:59:57.927401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.585 [2024-11-20 16:59:57.927410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.585 [2024-11-20 16:59:57.927880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.585 [2024-11-20 16:59:57.987405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73837 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:39.585 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=00042253-6477-418d-a68e-73f104fd4b7c 00:14:39.586 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=df0fe6d7-f891-451c-a9c3-e10fc62e4411 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e7f6eadb-fea6-4632-8ad2-48e057ab9b31 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.845 null0 00:14:39.845 null1 00:14:39.845 null2 00:14:39.845 [2024-11-20 16:59:58.156727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.845 [2024-11-20 16:59:58.159355] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:39.845 [2024-11-20 16:59:58.159430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73837 ] 00:14:39.845 [2024-11-20 16:59:58.180905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73837 /var/tmp/tgt2.sock 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73837 ']' 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
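The second target is provisioned over /var/tmp/tgt2.sock via rpc_cmd; the actual RPC payload is read from a heredoc and is not echoed in the trace, so the sequence below is only a plausible sketch. It assumes null bdevs exported as namespaces of nqn.2024-10.io.spdk:cnode2 with the UUIDs generated above, listening on 10.0.0.1:4421 — the subsystem name, bdev names null0..null2, and the port are taken from later log lines; everything else is an assumption:

```bash
# Hypothetical provisioning of the second target over /var/tmp/tgt2.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/tgt2.sock

"$rpc" -s "$sock" nvmf_create_transport -t tcp
"$rpc" -s "$sock" nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a

# Three small null bdevs (size in MiB, 512-byte blocks) to back the namespaces.
for i in 0 1 2; do
  "$rpc" -s "$sock" bdev_null_create "null$i" 64 512
done

# Attach each bdev as a namespace with an explicit UUID.
"$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"
"$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 --uuid "$ns2uuid"
"$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 --uuid "$ns3uuid"

"$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421
```

Giving each namespace a fixed UUID is what the rest of the test exercises: after `nvme connect`, nvme_get_nguid reads `nvme id-ns ... -o json | jq -r .nguid` and the `[[ ... == ... ]]` checks further down assert that the reported NGUID equals the UUID with the dashes stripped (uuid2nguid is just `tr -d -`).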
00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.845 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:39.845 [2024-11-20 16:59:58.308731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.845 [2024-11-20 16:59:58.370210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.104 [2024-11-20 16:59:58.446134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.362 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.362 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:40.362 16:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:40.621 [2024-11-20 16:59:59.123134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.621 [2024-11-20 16:59:59.139242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:40.886 nvme0n1 nvme0n2 00:14:40.887 nvme1n1 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:40.887 16:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:41.825 17:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 00042253-6477-418d-a68e-73f104fd4b7c 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:41.825 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=000422536477418da68e73f104fd4b7c 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 000422536477418DA68E73F104FD4B7C 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 000422536477418DA68E73F104FD4B7C == \0\0\0\4\2\2\5\3\6\4\7\7\4\1\8\D\A\6\8\E\7\3\F\1\0\4\F\D\4\B\7\C ]] 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid df0fe6d7-f891-451c-a9c3-e10fc62e4411 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=df0fe6d7f891451ca9c3e10fc62e4411 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DF0FE6D7F891451CA9C3E10FC62E4411 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DF0FE6D7F891451CA9C3E10FC62E4411 == \D\F\0\F\E\6\D\7\F\8\9\1\4\5\1\C\A\9\C\3\E\1\0\F\C\6\2\E\4\4\1\1 ]] 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:42.085 17:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e7f6eadb-fea6-4632-8ad2-48e057ab9b31 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e7f6eadbfea646328ad248e057ab9b31 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E7F6EADBFEA646328AD248E057AB9B31 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E7F6EADBFEA646328AD248E057AB9B31 == \E\7\F\6\E\A\D\B\F\E\A\6\4\6\3\2\8\A\D\2\4\8\E\0\5\7\A\B\9\B\3\1 ]] 00:14:42.085 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73837 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73837 ']' 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73837 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73837 00:14:42.344 killing process with pid 73837 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73837' 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73837 00:14:42.344 17:00:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73837 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.912 rmmod nvme_tcp 00:14:42.912 rmmod nvme_fabrics 00:14:42.912 rmmod nvme_keyring 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73808 ']' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73808 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73808 ']' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73808 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73808 00:14:42.912 killing process with pid 73808 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73808' 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73808 00:14:42.912 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73808 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:43.171 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:43.430 ************************************ 00:14:43.430 END TEST nvmf_nsid 00:14:43.430 ************************************ 00:14:43.430 00:14:43.430 real 0m4.760s 00:14:43.430 user 0m7.101s 00:14:43.430 sys 0m1.681s 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:43.430 ************************************ 00:14:43.430 END TEST nvmf_target_extra 00:14:43.430 ************************************ 00:14:43.430 00:14:43.430 real 5m8.610s 00:14:43.430 user 10m46.887s 00:14:43.430 sys 1m8.433s 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.430 17:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.430 17:00:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:43.430 17:00:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.430 17:00:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.430 17:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.430 ************************************ 00:14:43.430 START TEST nvmf_host 00:14:43.430 ************************************ 00:14:43.430 17:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:43.430 * Looking for test storage... 
00:14:43.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:43.430 17:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.430 17:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.430 17:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.690 --rc genhtml_branch_coverage=1 00:14:43.690 --rc genhtml_function_coverage=1 00:14:43.690 --rc genhtml_legend=1 00:14:43.690 --rc geninfo_all_blocks=1 00:14:43.690 --rc geninfo_unexecuted_blocks=1 00:14:43.690 00:14:43.690 ' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.690 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:43.690 --rc genhtml_branch_coverage=1 00:14:43.690 --rc genhtml_function_coverage=1 00:14:43.690 --rc genhtml_legend=1 00:14:43.690 --rc geninfo_all_blocks=1 00:14:43.690 --rc geninfo_unexecuted_blocks=1 00:14:43.690 00:14:43.690 ' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.690 --rc genhtml_branch_coverage=1 00:14:43.690 --rc genhtml_function_coverage=1 00:14:43.690 --rc genhtml_legend=1 00:14:43.690 --rc geninfo_all_blocks=1 00:14:43.690 --rc geninfo_unexecuted_blocks=1 00:14:43.690 00:14:43.690 ' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.690 --rc genhtml_branch_coverage=1 00:14:43.690 --rc genhtml_function_coverage=1 00:14:43.690 --rc genhtml_legend=1 00:14:43.690 --rc geninfo_all_blocks=1 00:14:43.690 --rc geninfo_unexecuted_blocks=1 00:14:43.690 00:14:43.690 ' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:43.690 
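The scripts/common.sh trace around here (and again below for nvmf_identify) is just the lcov version gate: `lt 1.15 2` splits both versions on `.-:` and compares the fields numerically to decide which LCOV_OPTS to export. A rough standalone equivalent of that comparison, for illustration only (the helper name is made up and pre-release suffixes are not handled):

```bash
# ver_lt A B: succeed when version A sorts strictly before version B.
ver_lt() {
  local -a a b
  IFS='.-:' read -ra a <<< "$1"
  IFS='.-:' read -ra b <<< "$2"
  local i x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}      # missing fields count as 0
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1                          # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov is older than 2.x, enabling branch/function coverage flags"
```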
17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.690 17:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:43.690 ************************************ 00:14:43.690 START TEST nvmf_identify 00:14:43.690 ************************************ 00:14:43.691 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:43.691 * Looking for test storage... 00:14:43.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:43.691 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.691 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.691 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.950 --rc genhtml_branch_coverage=1 00:14:43.950 --rc genhtml_function_coverage=1 00:14:43.950 --rc genhtml_legend=1 00:14:43.950 --rc geninfo_all_blocks=1 00:14:43.950 --rc geninfo_unexecuted_blocks=1 00:14:43.950 00:14:43.950 ' 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.950 --rc genhtml_branch_coverage=1 00:14:43.950 --rc genhtml_function_coverage=1 00:14:43.950 --rc genhtml_legend=1 00:14:43.950 --rc geninfo_all_blocks=1 00:14:43.950 --rc geninfo_unexecuted_blocks=1 00:14:43.950 00:14:43.950 ' 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.950 --rc genhtml_branch_coverage=1 00:14:43.950 --rc genhtml_function_coverage=1 00:14:43.950 --rc genhtml_legend=1 00:14:43.950 --rc geninfo_all_blocks=1 00:14:43.950 --rc geninfo_unexecuted_blocks=1 00:14:43.950 00:14:43.950 ' 00:14:43.950 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.950 --rc genhtml_branch_coverage=1 00:14:43.950 --rc genhtml_function_coverage=1 00:14:43.950 --rc genhtml_legend=1 00:14:43.950 --rc geninfo_all_blocks=1 00:14:43.950 --rc geninfo_unexecuted_blocks=1 00:14:43.950 00:14:43.950 ' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.951 
17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.951 17:00:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.951 Cannot find device "nvmf_init_br" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.951 Cannot find device "nvmf_init_br2" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.951 Cannot find device "nvmf_tgt_br" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:43.951 Cannot find device "nvmf_tgt_br2" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.951 Cannot find device "nvmf_init_br" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.951 Cannot find device "nvmf_init_br2" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.951 Cannot find device "nvmf_tgt_br" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.951 Cannot find device "nvmf_tgt_br2" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.951 Cannot find device "nvmf_br" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.951 Cannot find device "nvmf_init_if" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.951 Cannot find device "nvmf_init_if2" 00:14:43.951 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.952 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.242 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:44.242 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.242 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.242 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.243 
17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:44.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:44.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:14:44.243 00:14:44.243 --- 10.0.0.3 ping statistics --- 00:14:44.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.243 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:44.243 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:44.243 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:14:44.243 00:14:44.243 --- 10.0.0.4 ping statistics --- 00:14:44.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.243 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:44.243 00:14:44.243 --- 10.0.0.1 ping statistics --- 00:14:44.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.243 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:44.243 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:44.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:44.243 00:14:44.243 --- 10.0.0.2 ping statistics --- 00:14:44.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.243 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74188 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74188 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74188 ']' 00:14:44.509 
17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.509 17:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.509 [2024-11-20 17:00:02.853386] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:44.509 [2024-11-20 17:00:02.853498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.509 [2024-11-20 17:00:03.009305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.768 [2024-11-20 17:00:03.068964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.768 [2024-11-20 17:00:03.069034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.768 [2024-11-20 17:00:03.069049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.768 [2024-11-20 17:00:03.069061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.768 [2024-11-20 17:00:03.069071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
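Up to this point the test has built the veth/bridge topology, opened TCP port 4420 through iptables, and launched the target inside the nvmf_tgt_ns_spdk namespace; the waitforlisten step above then blocks until the target's RPC socket is ready. A minimal standalone sketch of that launch-and-wait step (repository path, core/trace masks, and RPC socket are the ones shown in this log; the polling loop is an assumed simplification of waitforlisten):

# Start the NVMe-oF target in the dedicated namespace, as host/identify.sh does above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Assumed stand-in for waitforlisten: poll until the default RPC UNIX socket exists.
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break
    sleep 0.1
done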
00:14:44.768 [2024-11-20 17:00:03.073866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.768 [2024-11-20 17:00:03.074028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.768 [2024-11-20 17:00:03.074169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.768 [2024-11-20 17:00:03.074175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.768 [2024-11-20 17:00:03.131675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.768 [2024-11-20 17:00:03.212639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.768 Malloc0 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.768 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 [2024-11-20 17:00:03.323927] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.028 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 [ 00:14:45.028 { 00:14:45.028 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.028 "subtype": "Discovery", 00:14:45.028 "listen_addresses": [ 00:14:45.028 { 00:14:45.028 "trtype": "TCP", 00:14:45.028 "adrfam": "IPv4", 00:14:45.028 "traddr": "10.0.0.3", 00:14:45.028 "trsvcid": "4420" 00:14:45.028 } 00:14:45.028 ], 00:14:45.028 "allow_any_host": true, 00:14:45.028 "hosts": [] 00:14:45.028 }, 00:14:45.028 { 00:14:45.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.029 "subtype": "NVMe", 00:14:45.029 "listen_addresses": [ 00:14:45.029 { 00:14:45.029 "trtype": "TCP", 00:14:45.029 "adrfam": "IPv4", 00:14:45.029 "traddr": "10.0.0.3", 00:14:45.029 "trsvcid": "4420" 00:14:45.029 } 00:14:45.029 ], 00:14:45.029 "allow_any_host": true, 00:14:45.029 "hosts": [], 00:14:45.029 "serial_number": "SPDK00000000000001", 00:14:45.029 "model_number": "SPDK bdev Controller", 00:14:45.029 "max_namespaces": 32, 00:14:45.029 "min_cntlid": 1, 00:14:45.029 "max_cntlid": 65519, 00:14:45.029 "namespaces": [ 00:14:45.029 { 00:14:45.029 "nsid": 1, 00:14:45.029 "bdev_name": "Malloc0", 00:14:45.029 "name": "Malloc0", 00:14:45.029 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:45.029 "eui64": "ABCDEF0123456789", 00:14:45.029 "uuid": "db8ab385-f510-4096-8cd9-ae4de90c5c05" 00:14:45.029 } 00:14:45.029 ] 00:14:45.029 } 00:14:45.029 ] 00:14:45.029 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.029 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:45.029 [2024-11-20 17:00:03.379368] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
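The rpc_cmd invocations above are thin wrappers around SPDK's JSON-RPC client; the same configuration can be issued directly with scripts/rpc.py (the rpc.py path and socket flag are assumptions, while the RPC names, arguments, NQN, and addresses are exactly the ones in this log):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with the same options as this run.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MB malloc bdev with 512-byte blocks, used as namespace 1 below.
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem allowing any host, plus its namespace and listeners on 10.0.0.3:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Dump both subsystems, matching the JSON shown below.
$RPC nvmf_get_subsystems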
00:14:45.029 [2024-11-20 17:00:03.379441] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74217 ] 00:14:45.029 [2024-11-20 17:00:03.536951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:45.029 [2024-11-20 17:00:03.537023] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:45.029 [2024-11-20 17:00:03.537030] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:45.029 [2024-11-20 17:00:03.537048] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:45.029 [2024-11-20 17:00:03.537059] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:45.029 [2024-11-20 17:00:03.537451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:45.029 [2024-11-20 17:00:03.537529] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15b9750 0 00:14:45.029 [2024-11-20 17:00:03.544904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:45.029 [2024-11-20 17:00:03.544931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:45.029 [2024-11-20 17:00:03.544937] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:45.029 [2024-11-20 17:00:03.544941] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:45.029 [2024-11-20 17:00:03.544974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.544981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.544985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.545000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:45.029 [2024-11-20 17:00:03.545034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.558934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.558958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.558963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.558968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.558980] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:45.029 [2024-11-20 17:00:03.558989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:45.029 [2024-11-20 17:00:03.558995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:45.029 [2024-11-20 17:00:03.559014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:45.029 [2024-11-20 17:00:03.559023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.029 [2024-11-20 17:00:03.559068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.559146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.559153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.559157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.559167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:45.029 [2024-11-20 17:00:03.559190] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:45.029 [2024-11-20 17:00:03.559198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.029 [2024-11-20 17:00:03.559282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.559347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.559354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.559358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.559369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:45.029 [2024-11-20 17:00:03.559378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.029 [2024-11-20 17:00:03.559420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.559478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.559485] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.559489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.559499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.029 [2024-11-20 17:00:03.559545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.559601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.559607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.559611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.559621] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:45.029 [2024-11-20 17:00:03.559627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559767] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:45.029 [2024-11-20 17:00:03.559773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.029 [2024-11-20 17:00:03.559819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.029 [2024-11-20 17:00:03.559886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.029 [2024-11-20 17:00:03.559893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.029 [2024-11-20 17:00:03.559896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:45.029 [2024-11-20 17:00:03.559900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.029 [2024-11-20 17:00:03.559906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:45.029 [2024-11-20 17:00:03.559931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.029 [2024-11-20 17:00:03.559942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.029 [2024-11-20 17:00:03.559950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.030 [2024-11-20 17:00:03.559971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.030 [2024-11-20 17:00:03.560025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.030 [2024-11-20 17:00:03.560033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.030 [2024-11-20 17:00:03.560036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.030 [2024-11-20 17:00:03.560046] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:45.030 [2024-11-20 17:00:03.560051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:45.030 [2024-11-20 17:00:03.560077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.030 [2024-11-20 17:00:03.560123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.030 [2024-11-20 17:00:03.560249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.030 [2024-11-20 17:00:03.560258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.030 [2024-11-20 17:00:03.560262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b9750): datao=0, datal=4096, cccid=0 00:14:45.030 [2024-11-20 17:00:03.560272] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161d740) on tqpair(0x15b9750): expected_datao=0, payload_size=4096 00:14:45.030 [2024-11-20 17:00:03.560277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560286] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.030 [2024-11-20 17:00:03.560306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.030 [2024-11-20 17:00:03.560310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.030 [2024-11-20 17:00:03.560324] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:45.030 [2024-11-20 17:00:03.560330] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:45.030 [2024-11-20 17:00:03.560334] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:45.030 [2024-11-20 17:00:03.560340] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:45.030 [2024-11-20 17:00:03.560345] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:45.030 [2024-11-20 17:00:03.560351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.030 [2024-11-20 17:00:03.560414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.030 [2024-11-20 17:00:03.560478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.030 [2024-11-20 17:00:03.560485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.030 [2024-11-20 17:00:03.560489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.030 [2024-11-20 17:00:03.560502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.030 
[2024-11-20 17:00:03.560532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.030 [2024-11-20 17:00:03.560568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.030 [2024-11-20 17:00:03.560588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.030 [2024-11-20 17:00:03.560607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:45.030 [2024-11-20 17:00:03.560639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.030 [2024-11-20 17:00:03.560671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d740, cid 0, qid 0 00:14:45.030 [2024-11-20 17:00:03.560679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161d8c0, cid 1, qid 0 00:14:45.030 [2024-11-20 17:00:03.560684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161da40, cid 2, qid 0 00:14:45.030 [2024-11-20 17:00:03.560689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.030 [2024-11-20 17:00:03.560695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dd40, cid 4, qid 0 00:14:45.030 [2024-11-20 17:00:03.560790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.030 [2024-11-20 17:00:03.560797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.030 [2024-11-20 17:00:03.560801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dd40) on tqpair=0x15b9750 00:14:45.030 [2024-11-20 
17:00:03.560811] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:45.030 [2024-11-20 17:00:03.560817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:45.030 [2024-11-20 17:00:03.560829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.560870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.030 [2024-11-20 17:00:03.560894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dd40, cid 4, qid 0 00:14:45.030 [2024-11-20 17:00:03.560967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.030 [2024-11-20 17:00:03.560974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.030 [2024-11-20 17:00:03.560978] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.560982] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b9750): datao=0, datal=4096, cccid=4 00:14:45.030 [2024-11-20 17:00:03.560987] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161dd40) on tqpair(0x15b9750): expected_datao=0, payload_size=4096 00:14:45.030 [2024-11-20 17:00:03.560992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561000] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561004] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.030 [2024-11-20 17:00:03.561019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.030 [2024-11-20 17:00:03.561023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dd40) on tqpair=0x15b9750 00:14:45.030 [2024-11-20 17:00:03.561047] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:45.030 [2024-11-20 17:00:03.561083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.561097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.030 [2024-11-20 17:00:03.561105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.030 [2024-11-20 17:00:03.561114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b9750) 00:14:45.030 [2024-11-20 17:00:03.561120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.031 [2024-11-20 17:00:03.561147] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dd40, cid 4, qid 0 00:14:45.031 [2024-11-20 17:00:03.561156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dec0, cid 5, qid 0 00:14:45.031 [2024-11-20 17:00:03.561329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.031 [2024-11-20 17:00:03.561353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.031 [2024-11-20 17:00:03.561358] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561362] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b9750): datao=0, datal=1024, cccid=4 00:14:45.031 [2024-11-20 17:00:03.561367] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161dd40) on tqpair(0x15b9750): expected_datao=0, payload_size=1024 00:14:45.031 [2024-11-20 17:00:03.561372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561379] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561383] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.031 [2024-11-20 17:00:03.561396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.031 [2024-11-20 17:00:03.561400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dec0) on tqpair=0x15b9750 00:14:45.031 [2024-11-20 17:00:03.561424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.031 [2024-11-20 17:00:03.561433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.031 [2024-11-20 17:00:03.561436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dd40) on tqpair=0x15b9750 00:14:45.031 [2024-11-20 17:00:03.561455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b9750) 00:14:45.031 [2024-11-20 17:00:03.561468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.031 [2024-11-20 17:00:03.561494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dd40, cid 4, qid 0 00:14:45.031 [2024-11-20 17:00:03.561579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.031 [2024-11-20 17:00:03.561591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.031 [2024-11-20 17:00:03.561596] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561600] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b9750): datao=0, datal=3072, cccid=4 00:14:45.031 [2024-11-20 17:00:03.561605] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161dd40) on tqpair(0x15b9750): expected_datao=0, payload_size=3072 00:14:45.031 [2024-11-20 17:00:03.561625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561632] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
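Once the admin queue reaches the ready state, the identify tool issues the GET LOG PAGE commands visible in this trace and prints the controller dump that follows; -L all is what enables the interleaved *DEBUG* lines. Both invocations exercised by this test can be reproduced standalone (binary path as shown in this log):

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

# Discovery subsystem: controller data plus the two discovery log entries printed below.
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

# NVM subsystem created earlier (the second identify run, issued further below).
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all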
00:14:45.031 [2024-11-20 17:00:03.561636] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.031 [2024-11-20 17:00:03.561651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.031 [2024-11-20 17:00:03.561655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dd40) on tqpair=0x15b9750 00:14:45.031 [2024-11-20 17:00:03.561669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b9750) 00:14:45.031 [2024-11-20 17:00:03.561681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.031 [2024-11-20 17:00:03.561706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dd40, cid 4, qid 0 00:14:45.031 [2024-11-20 17:00:03.561777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.031 [2024-11-20 17:00:03.561783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.031 [2024-11-20 17:00:03.561787] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561791] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b9750): datao=0, datal=8, cccid=4 00:14:45.031 [2024-11-20 17:00:03.561795] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x161dd40) on tqpair(0x15b9750): expected_datao=0, payload_size=8 00:14:45.031 [2024-11-20 17:00:03.561800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561811] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.031 [2024-11-20 17:00:03.561851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.031 [2024-11-20 17:00:03.561868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.031 [2024-11-20 17:00:03.561873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dd40) on tqpair=0x15b9750 00:14:45.295 ===================================================== 00:14:45.295 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:45.295 ===================================================== 00:14:45.295 Controller Capabilities/Features 00:14:45.295 ================================ 00:14:45.295 Vendor ID: 0000 00:14:45.295 Subsystem Vendor ID: 0000 00:14:45.295 Serial Number: .................... 00:14:45.295 Model Number: ........................................ 
00:14:45.295 Firmware Version: 25.01 00:14:45.295 Recommended Arb Burst: 0 00:14:45.295 IEEE OUI Identifier: 00 00 00 00:14:45.295 Multi-path I/O 00:14:45.295 May have multiple subsystem ports: No 00:14:45.295 May have multiple controllers: No 00:14:45.295 Associated with SR-IOV VF: No 00:14:45.295 Max Data Transfer Size: 131072 00:14:45.295 Max Number of Namespaces: 0 00:14:45.295 Max Number of I/O Queues: 1024 00:14:45.295 NVMe Specification Version (VS): 1.3 00:14:45.295 NVMe Specification Version (Identify): 1.3 00:14:45.295 Maximum Queue Entries: 128 00:14:45.295 Contiguous Queues Required: Yes 00:14:45.295 Arbitration Mechanisms Supported 00:14:45.295 Weighted Round Robin: Not Supported 00:14:45.295 Vendor Specific: Not Supported 00:14:45.295 Reset Timeout: 15000 ms 00:14:45.295 Doorbell Stride: 4 bytes 00:14:45.295 NVM Subsystem Reset: Not Supported 00:14:45.295 Command Sets Supported 00:14:45.295 NVM Command Set: Supported 00:14:45.295 Boot Partition: Not Supported 00:14:45.295 Memory Page Size Minimum: 4096 bytes 00:14:45.295 Memory Page Size Maximum: 4096 bytes 00:14:45.295 Persistent Memory Region: Not Supported 00:14:45.295 Optional Asynchronous Events Supported 00:14:45.295 Namespace Attribute Notices: Not Supported 00:14:45.295 Firmware Activation Notices: Not Supported 00:14:45.295 ANA Change Notices: Not Supported 00:14:45.295 PLE Aggregate Log Change Notices: Not Supported 00:14:45.295 LBA Status Info Alert Notices: Not Supported 00:14:45.295 EGE Aggregate Log Change Notices: Not Supported 00:14:45.295 Normal NVM Subsystem Shutdown event: Not Supported 00:14:45.295 Zone Descriptor Change Notices: Not Supported 00:14:45.295 Discovery Log Change Notices: Supported 00:14:45.295 Controller Attributes 00:14:45.295 128-bit Host Identifier: Not Supported 00:14:45.295 Non-Operational Permissive Mode: Not Supported 00:14:45.295 NVM Sets: Not Supported 00:14:45.295 Read Recovery Levels: Not Supported 00:14:45.295 Endurance Groups: Not Supported 00:14:45.295 Predictable Latency Mode: Not Supported 00:14:45.295 Traffic Based Keep ALive: Not Supported 00:14:45.295 Namespace Granularity: Not Supported 00:14:45.295 SQ Associations: Not Supported 00:14:45.295 UUID List: Not Supported 00:14:45.295 Multi-Domain Subsystem: Not Supported 00:14:45.295 Fixed Capacity Management: Not Supported 00:14:45.295 Variable Capacity Management: Not Supported 00:14:45.295 Delete Endurance Group: Not Supported 00:14:45.295 Delete NVM Set: Not Supported 00:14:45.295 Extended LBA Formats Supported: Not Supported 00:14:45.295 Flexible Data Placement Supported: Not Supported 00:14:45.295 00:14:45.296 Controller Memory Buffer Support 00:14:45.296 ================================ 00:14:45.296 Supported: No 00:14:45.296 00:14:45.296 Persistent Memory Region Support 00:14:45.296 ================================ 00:14:45.296 Supported: No 00:14:45.296 00:14:45.296 Admin Command Set Attributes 00:14:45.296 ============================ 00:14:45.296 Security Send/Receive: Not Supported 00:14:45.296 Format NVM: Not Supported 00:14:45.296 Firmware Activate/Download: Not Supported 00:14:45.296 Namespace Management: Not Supported 00:14:45.296 Device Self-Test: Not Supported 00:14:45.296 Directives: Not Supported 00:14:45.296 NVMe-MI: Not Supported 00:14:45.296 Virtualization Management: Not Supported 00:14:45.296 Doorbell Buffer Config: Not Supported 00:14:45.296 Get LBA Status Capability: Not Supported 00:14:45.296 Command & Feature Lockdown Capability: Not Supported 00:14:45.296 Abort Command Limit: 1 00:14:45.296 Async 
Event Request Limit: 4 00:14:45.296 Number of Firmware Slots: N/A 00:14:45.296 Firmware Slot 1 Read-Only: N/A 00:14:45.296 Firmware Activation Without Reset: N/A 00:14:45.296 Multiple Update Detection Support: N/A 00:14:45.296 Firmware Update Granularity: No Information Provided 00:14:45.296 Per-Namespace SMART Log: No 00:14:45.296 Asymmetric Namespace Access Log Page: Not Supported 00:14:45.296 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:45.296 Command Effects Log Page: Not Supported 00:14:45.296 Get Log Page Extended Data: Supported 00:14:45.296 Telemetry Log Pages: Not Supported 00:14:45.296 Persistent Event Log Pages: Not Supported 00:14:45.296 Supported Log Pages Log Page: May Support 00:14:45.296 Commands Supported & Effects Log Page: Not Supported 00:14:45.296 Feature Identifiers & Effects Log Page:May Support 00:14:45.296 NVMe-MI Commands & Effects Log Page: May Support 00:14:45.296 Data Area 4 for Telemetry Log: Not Supported 00:14:45.296 Error Log Page Entries Supported: 128 00:14:45.296 Keep Alive: Not Supported 00:14:45.296 00:14:45.296 NVM Command Set Attributes 00:14:45.296 ========================== 00:14:45.296 Submission Queue Entry Size 00:14:45.296 Max: 1 00:14:45.296 Min: 1 00:14:45.296 Completion Queue Entry Size 00:14:45.296 Max: 1 00:14:45.296 Min: 1 00:14:45.296 Number of Namespaces: 0 00:14:45.296 Compare Command: Not Supported 00:14:45.296 Write Uncorrectable Command: Not Supported 00:14:45.296 Dataset Management Command: Not Supported 00:14:45.296 Write Zeroes Command: Not Supported 00:14:45.296 Set Features Save Field: Not Supported 00:14:45.296 Reservations: Not Supported 00:14:45.296 Timestamp: Not Supported 00:14:45.296 Copy: Not Supported 00:14:45.296 Volatile Write Cache: Not Present 00:14:45.296 Atomic Write Unit (Normal): 1 00:14:45.296 Atomic Write Unit (PFail): 1 00:14:45.296 Atomic Compare & Write Unit: 1 00:14:45.296 Fused Compare & Write: Supported 00:14:45.296 Scatter-Gather List 00:14:45.296 SGL Command Set: Supported 00:14:45.296 SGL Keyed: Supported 00:14:45.296 SGL Bit Bucket Descriptor: Not Supported 00:14:45.296 SGL Metadata Pointer: Not Supported 00:14:45.296 Oversized SGL: Not Supported 00:14:45.296 SGL Metadata Address: Not Supported 00:14:45.296 SGL Offset: Supported 00:14:45.296 Transport SGL Data Block: Not Supported 00:14:45.296 Replay Protected Memory Block: Not Supported 00:14:45.296 00:14:45.296 Firmware Slot Information 00:14:45.296 ========================= 00:14:45.296 Active slot: 0 00:14:45.296 00:14:45.296 00:14:45.296 Error Log 00:14:45.296 ========= 00:14:45.296 00:14:45.296 Active Namespaces 00:14:45.296 ================= 00:14:45.296 Discovery Log Page 00:14:45.296 ================== 00:14:45.296 Generation Counter: 2 00:14:45.296 Number of Records: 2 00:14:45.296 Record Format: 0 00:14:45.296 00:14:45.296 Discovery Log Entry 0 00:14:45.296 ---------------------- 00:14:45.296 Transport Type: 3 (TCP) 00:14:45.296 Address Family: 1 (IPv4) 00:14:45.296 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:45.296 Entry Flags: 00:14:45.296 Duplicate Returned Information: 1 00:14:45.296 Explicit Persistent Connection Support for Discovery: 1 00:14:45.296 Transport Requirements: 00:14:45.296 Secure Channel: Not Required 00:14:45.296 Port ID: 0 (0x0000) 00:14:45.296 Controller ID: 65535 (0xffff) 00:14:45.296 Admin Max SQ Size: 128 00:14:45.296 Transport Service Identifier: 4420 00:14:45.296 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:45.296 Transport Address: 10.0.0.3 00:14:45.296 
Discovery Log Entry 1 00:14:45.296 ---------------------- 00:14:45.296 Transport Type: 3 (TCP) 00:14:45.296 Address Family: 1 (IPv4) 00:14:45.296 Subsystem Type: 2 (NVM Subsystem) 00:14:45.296 Entry Flags: 00:14:45.296 Duplicate Returned Information: 0 00:14:45.296 Explicit Persistent Connection Support for Discovery: 0 00:14:45.296 Transport Requirements: 00:14:45.296 Secure Channel: Not Required 00:14:45.296 Port ID: 0 (0x0000) 00:14:45.296 Controller ID: 65535 (0xffff) 00:14:45.296 Admin Max SQ Size: 128 00:14:45.296 Transport Service Identifier: 4420 00:14:45.296 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:45.296 Transport Address: 10.0.0.3 [2024-11-20 17:00:03.561978] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:45.296 [2024-11-20 17:00:03.561993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d740) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.296 [2024-11-20 17:00:03.562006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161d8c0) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.296 [2024-11-20 17:00:03.562016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161da40) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.296 [2024-11-20 17:00:03.562027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.296 [2024-11-20 17:00:03.562041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.296 [2024-11-20 17:00:03.562059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.296 [2024-11-20 17:00:03.562092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.296 [2024-11-20 17:00:03.562160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.296 [2024-11-20 17:00:03.562167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.296 [2024-11-20 17:00:03.562171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.296 [2024-11-20 
17:00:03.562199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.296 [2024-11-20 17:00:03.562223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.296 [2024-11-20 17:00:03.562308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.296 [2024-11-20 17:00:03.562315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.296 [2024-11-20 17:00:03.562319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.296 [2024-11-20 17:00:03.562328] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:45.296 [2024-11-20 17:00:03.562333] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:45.296 [2024-11-20 17:00:03.562344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.296 [2024-11-20 17:00:03.562352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.296 [2024-11-20 17:00:03.562360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.296 [2024-11-20 17:00:03.562378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.296 [2024-11-20 17:00:03.562425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.296 [2024-11-20 17:00:03.562432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.296 [2024-11-20 17:00:03.562436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.297 [2024-11-20 17:00:03.562455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.297 [2024-11-20 17:00:03.562472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.562490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.297 [2024-11-20 17:00:03.562542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.562549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.562552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.297 [2024-11-20 17:00:03.562567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562576] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.297 [2024-11-20 17:00:03.562584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.562601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.297 [2024-11-20 17:00:03.562719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.562726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.562730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.297 [2024-11-20 17:00:03.562745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.562754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.297 [2024-11-20 17:00:03.562761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.562779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.297 [2024-11-20 17:00:03.565900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.565925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.565930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.565934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.297 [2024-11-20 17:00:03.565949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.565954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.565959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b9750) 00:14:45.297 [2024-11-20 17:00:03.565968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.565994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x161dbc0, cid 3, qid 0 00:14:45.297 [2024-11-20 17:00:03.566052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.566059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.566079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.566083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x161dbc0) on tqpair=0x15b9750 00:14:45.297 [2024-11-20 17:00:03.566092] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 3 milliseconds 00:14:45.297 00:14:45.297 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:45.297 [2024-11-20 
17:00:03.609819] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:45.297 [2024-11-20 17:00:03.609889] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74219 ] 00:14:45.297 [2024-11-20 17:00:03.763787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:45.297 [2024-11-20 17:00:03.767880] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:45.297 [2024-11-20 17:00:03.767900] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:45.297 [2024-11-20 17:00:03.767917] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:45.297 [2024-11-20 17:00:03.767927] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:45.297 [2024-11-20 17:00:03.768280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:45.297 [2024-11-20 17:00:03.768355] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x194e750 0 00:14:45.297 [2024-11-20 17:00:03.782874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:45.297 [2024-11-20 17:00:03.782901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:45.297 [2024-11-20 17:00:03.782907] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:45.297 [2024-11-20 17:00:03.782911] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:45.297 [2024-11-20 17:00:03.782942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.782949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.782954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.297 [2024-11-20 17:00:03.782966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:45.297 [2024-11-20 17:00:03.782998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.297 [2024-11-20 17:00:03.790882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.790907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.790912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.790917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.297 [2024-11-20 17:00:03.790931] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:45.297 [2024-11-20 17:00:03.790939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:45.297 [2024-11-20 17:00:03.790945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:45.297 [2024-11-20 17:00:03.790962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.790967] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.790971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.297 [2024-11-20 17:00:03.790980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.791009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.297 [2024-11-20 17:00:03.791078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.791085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.791105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.297 [2024-11-20 17:00:03.791116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:45.297 [2024-11-20 17:00:03.791124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:45.297 [2024-11-20 17:00:03.791132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.297 [2024-11-20 17:00:03.791149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.791169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.297 [2024-11-20 17:00:03.791216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 17:00:03.791223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.791226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.297 [2024-11-20 17:00:03.791237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:45.297 [2024-11-20 17:00:03.791246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:45.297 [2024-11-20 17:00:03.791254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.297 [2024-11-20 17:00:03.791270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.297 [2024-11-20 17:00:03.791288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.297 [2024-11-20 17:00:03.791332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.297 [2024-11-20 
17:00:03.791339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.297 [2024-11-20 17:00:03.791343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.297 [2024-11-20 17:00:03.791353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:45.297 [2024-11-20 17:00:03.791363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.297 [2024-11-20 17:00:03.791372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.791380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.298 [2024-11-20 17:00:03.791397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.791447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.791454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.791457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.791467] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:45.298 [2024-11-20 17:00:03.791472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:45.298 [2024-11-20 17:00:03.791481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:45.298 [2024-11-20 17:00:03.791592] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:45.298 [2024-11-20 17:00:03.791599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:45.298 [2024-11-20 17:00:03.791608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.791625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.298 [2024-11-20 17:00:03.791644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.791695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.791702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.791706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 
17:00:03.791710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.791716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:45.298 [2024-11-20 17:00:03.791726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.791743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.298 [2024-11-20 17:00:03.791760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.791801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.791808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.791811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.791821] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:45.298 [2024-11-20 17:00:03.791826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.791850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:45.298 [2024-11-20 17:00:03.791868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.791894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.791899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.791907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.298 [2024-11-20 17:00:03.791929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.792033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.298 [2024-11-20 17:00:03.792041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.298 [2024-11-20 17:00:03.792046] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=4096, cccid=0 00:14:45.298 [2024-11-20 17:00:03.792055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2740) on tqpair(0x194e750): expected_datao=0, payload_size=4096 00:14:45.298 [2024-11-20 17:00:03.792060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792069] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792074] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.792089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.792093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.792106] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:45.298 [2024-11-20 17:00:03.792112] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:45.298 [2024-11-20 17:00:03.792117] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:45.298 [2024-11-20 17:00:03.792122] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:45.298 [2024-11-20 17:00:03.792127] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:45.298 [2024-11-20 17:00:03.792132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.792147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.792168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.298 [2024-11-20 17:00:03.792211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.792267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.792275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.792278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.792291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.298 [2024-11-20 17:00:03.792314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792318] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.298 [2024-11-20 17:00:03.792334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.298 [2024-11-20 17:00:03.792356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.298 [2024-11-20 17:00:03.792376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.792390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:45.298 [2024-11-20 17:00:03.792399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.298 [2024-11-20 17:00:03.792410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.298 [2024-11-20 17:00:03.792434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2740, cid 0, qid 0 00:14:45.298 [2024-11-20 17:00:03.792442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b28c0, cid 1, qid 0 00:14:45.298 [2024-11-20 17:00:03.792447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2a40, cid 2, qid 0 00:14:45.298 [2024-11-20 17:00:03.792452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.298 [2024-11-20 17:00:03.792457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.298 [2024-11-20 17:00:03.792558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.298 [2024-11-20 17:00:03.792565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.298 [2024-11-20 17:00:03.792568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.298 [2024-11-20 17:00:03.792573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.298 [2024-11-20 17:00:03.792578] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
00:14:45.299 [2024-11-20 17:00:03.792584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.792593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.792604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.792612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.792628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.299 [2024-11-20 17:00:03.792646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.299 [2024-11-20 17:00:03.792699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.792706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.792710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.792778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.792791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.792800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.792812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.299 [2024-11-20 17:00:03.792832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.299 [2024-11-20 17:00:03.792905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.299 [2024-11-20 17:00:03.792914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.299 [2024-11-20 17:00:03.792917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=4096, cccid=4 00:14:45.299 [2024-11-20 17:00:03.792926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2d40) on tqpair(0x194e750): expected_datao=0, payload_size=4096 00:14:45.299 [2024-11-20 17:00:03.792931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792939] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792943] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.792958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.792961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.792966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.792982] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:45.299 [2024-11-20 17:00:03.792993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.793025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.299 [2024-11-20 17:00:03.793046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.299 [2024-11-20 17:00:03.793203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.299 [2024-11-20 17:00:03.793210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.299 [2024-11-20 17:00:03.793214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=4096, cccid=4 00:14:45.299 [2024-11-20 17:00:03.793223] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2d40) on tqpair(0x194e750): expected_datao=0, payload_size=4096 00:14:45.299 [2024-11-20 17:00:03.793227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793235] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793239] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.793253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.793257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.793280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 
[2024-11-20 17:00:03.793305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.793313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.299 [2024-11-20 17:00:03.793333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.299 [2024-11-20 17:00:03.793397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.299 [2024-11-20 17:00:03.793404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.299 [2024-11-20 17:00:03.793408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793411] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=4096, cccid=4 00:14:45.299 [2024-11-20 17:00:03.793416] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2d40) on tqpair(0x194e750): expected_datao=0, payload_size=4096 00:14:45.299 [2024-11-20 17:00:03.793421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793429] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793433] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.793447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.793451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.793464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793509] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:45.299 [2024-11-20 17:00:03.793514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:45.299 [2024-11-20 17:00:03.793520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:45.299 [2024-11-20 17:00:03.793537] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.793550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.299 [2024-11-20 17:00:03.793558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.793572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.299 [2024-11-20 17:00:03.793598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.299 [2024-11-20 17:00:03.793605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2ec0, cid 5, qid 0 00:14:45.299 [2024-11-20 17:00:03.793663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.793670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.793674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.793685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.299 [2024-11-20 17:00:03.793691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.299 [2024-11-20 17:00:03.793695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2ec0) on tqpair=0x194e750 00:14:45.299 [2024-11-20 17:00:03.793709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.299 [2024-11-20 17:00:03.793714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x194e750) 00:14:45.299 [2024-11-20 17:00:03.793722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.299 [2024-11-20 17:00:03.793739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2ec0, cid 5, qid 0 00:14:45.300 [2024-11-20 17:00:03.793784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.793791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.793794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.793798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2ec0) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.793809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.793814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.793821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.793853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2ec0, cid 5, qid 0 00:14:45.300 [2024-11-20 17:00:03.793927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.793934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.793938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.793942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2ec0) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.793953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.793958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.793966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.793984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2ec0, cid 5, qid 0 00:14:45.300 [2024-11-20 17:00:03.794030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.794037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.794040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2ec0) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.794065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.794079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.794087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.794098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.794106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.794117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.794126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x194e750) 00:14:45.300 [2024-11-20 17:00:03.794136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.300 [2024-11-20 17:00:03.794157] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2ec0, cid 5, qid 0 00:14:45.300 [2024-11-20 17:00:03.794165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2d40, cid 4, qid 0 00:14:45.300 [2024-11-20 17:00:03.794170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b3040, cid 6, qid 0 00:14:45.300 [2024-11-20 17:00:03.794175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b31c0, cid 7, qid 0 00:14:45.300 [2024-11-20 17:00:03.794319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.300 [2024-11-20 17:00:03.794326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.300 [2024-11-20 17:00:03.794330] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794334] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=8192, cccid=5 00:14:45.300 [2024-11-20 17:00:03.794339] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2ec0) on tqpair(0x194e750): expected_datao=0, payload_size=8192 00:14:45.300 [2024-11-20 17:00:03.794344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794361] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794366] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.300 [2024-11-20 17:00:03.794378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.300 [2024-11-20 17:00:03.794382] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794386] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=512, cccid=4 00:14:45.300 [2024-11-20 17:00:03.794391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b2d40) on tqpair(0x194e750): expected_datao=0, payload_size=512 00:14:45.300 [2024-11-20 17:00:03.794396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794402] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794406] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.300 [2024-11-20 17:00:03.794418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.300 [2024-11-20 17:00:03.794422] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=512, cccid=6 00:14:45.300 [2024-11-20 17:00:03.794431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b3040) on tqpair(0x194e750): expected_datao=0, payload_size=512 00:14:45.300 [2024-11-20 17:00:03.794435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794442] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794446] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.300 [2024-11-20 17:00:03.794458] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.300 [2024-11-20 17:00:03.794461] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794465] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x194e750): datao=0, datal=4096, cccid=7 00:14:45.300 [2024-11-20 17:00:03.794470] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b31c0) on tqpair(0x194e750): expected_datao=0, payload_size=4096 00:14:45.300 [2024-11-20 17:00:03.794475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794485] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.794500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.794504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2ec0) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.794525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.794532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.794536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2d40) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.794553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.794560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.794564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b3040) on tqpair=0x194e750 00:14:45.300 [2024-11-20 17:00:03.794576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.300 [2024-11-20 17:00:03.794582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.300 [2024-11-20 17:00:03.794586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.300 [2024-11-20 17:00:03.794590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b31c0) on tqpair=0x194e750 00:14:45.300 ===================================================== 00:14:45.300 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.300 ===================================================== 00:14:45.300 Controller Capabilities/Features 00:14:45.300 ================================ 00:14:45.300 Vendor ID: 8086 00:14:45.300 Subsystem Vendor ID: 8086 00:14:45.300 Serial Number: SPDK00000000000001 00:14:45.300 Model Number: SPDK bdev Controller 00:14:45.301 Firmware Version: 25.01 00:14:45.301 Recommended Arb Burst: 6 00:14:45.301 IEEE OUI Identifier: e4 d2 5c 00:14:45.301 Multi-path I/O 00:14:45.301 May have multiple subsystem ports: Yes 00:14:45.301 May have multiple controllers: Yes 00:14:45.301 Associated with SR-IOV VF: No 00:14:45.301 Max Data Transfer Size: 131072 00:14:45.301 Max Number of Namespaces: 32 00:14:45.301 Max Number of I/O Queues: 127 
00:14:45.301 NVMe Specification Version (VS): 1.3 00:14:45.301 NVMe Specification Version (Identify): 1.3 00:14:45.301 Maximum Queue Entries: 128 00:14:45.301 Contiguous Queues Required: Yes 00:14:45.301 Arbitration Mechanisms Supported 00:14:45.301 Weighted Round Robin: Not Supported 00:14:45.301 Vendor Specific: Not Supported 00:14:45.301 Reset Timeout: 15000 ms 00:14:45.301 Doorbell Stride: 4 bytes 00:14:45.301 NVM Subsystem Reset: Not Supported 00:14:45.301 Command Sets Supported 00:14:45.301 NVM Command Set: Supported 00:14:45.301 Boot Partition: Not Supported 00:14:45.301 Memory Page Size Minimum: 4096 bytes 00:14:45.301 Memory Page Size Maximum: 4096 bytes 00:14:45.301 Persistent Memory Region: Not Supported 00:14:45.301 Optional Asynchronous Events Supported 00:14:45.301 Namespace Attribute Notices: Supported 00:14:45.301 Firmware Activation Notices: Not Supported 00:14:45.301 ANA Change Notices: Not Supported 00:14:45.301 PLE Aggregate Log Change Notices: Not Supported 00:14:45.301 LBA Status Info Alert Notices: Not Supported 00:14:45.301 EGE Aggregate Log Change Notices: Not Supported 00:14:45.301 Normal NVM Subsystem Shutdown event: Not Supported 00:14:45.301 Zone Descriptor Change Notices: Not Supported 00:14:45.301 Discovery Log Change Notices: Not Supported 00:14:45.301 Controller Attributes 00:14:45.301 128-bit Host Identifier: Supported 00:14:45.301 Non-Operational Permissive Mode: Not Supported 00:14:45.301 NVM Sets: Not Supported 00:14:45.301 Read Recovery Levels: Not Supported 00:14:45.301 Endurance Groups: Not Supported 00:14:45.301 Predictable Latency Mode: Not Supported 00:14:45.301 Traffic Based Keep ALive: Not Supported 00:14:45.301 Namespace Granularity: Not Supported 00:14:45.301 SQ Associations: Not Supported 00:14:45.301 UUID List: Not Supported 00:14:45.301 Multi-Domain Subsystem: Not Supported 00:14:45.301 Fixed Capacity Management: Not Supported 00:14:45.301 Variable Capacity Management: Not Supported 00:14:45.301 Delete Endurance Group: Not Supported 00:14:45.301 Delete NVM Set: Not Supported 00:14:45.301 Extended LBA Formats Supported: Not Supported 00:14:45.301 Flexible Data Placement Supported: Not Supported 00:14:45.301 00:14:45.301 Controller Memory Buffer Support 00:14:45.301 ================================ 00:14:45.301 Supported: No 00:14:45.301 00:14:45.301 Persistent Memory Region Support 00:14:45.301 ================================ 00:14:45.301 Supported: No 00:14:45.301 00:14:45.301 Admin Command Set Attributes 00:14:45.301 ============================ 00:14:45.301 Security Send/Receive: Not Supported 00:14:45.301 Format NVM: Not Supported 00:14:45.301 Firmware Activate/Download: Not Supported 00:14:45.301 Namespace Management: Not Supported 00:14:45.301 Device Self-Test: Not Supported 00:14:45.301 Directives: Not Supported 00:14:45.301 NVMe-MI: Not Supported 00:14:45.301 Virtualization Management: Not Supported 00:14:45.301 Doorbell Buffer Config: Not Supported 00:14:45.301 Get LBA Status Capability: Not Supported 00:14:45.301 Command & Feature Lockdown Capability: Not Supported 00:14:45.301 Abort Command Limit: 4 00:14:45.301 Async Event Request Limit: 4 00:14:45.301 Number of Firmware Slots: N/A 00:14:45.301 Firmware Slot 1 Read-Only: N/A 00:14:45.301 Firmware Activation Without Reset: N/A 00:14:45.301 Multiple Update Detection Support: N/A 00:14:45.301 Firmware Update Granularity: No Information Provided 00:14:45.301 Per-Namespace SMART Log: No 00:14:45.301 Asymmetric Namespace Access Log Page: Not Supported 00:14:45.301 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:14:45.301 Command Effects Log Page: Supported 00:14:45.301 Get Log Page Extended Data: Supported 00:14:45.301 Telemetry Log Pages: Not Supported 00:14:45.301 Persistent Event Log Pages: Not Supported 00:14:45.301 Supported Log Pages Log Page: May Support 00:14:45.301 Commands Supported & Effects Log Page: Not Supported 00:14:45.301 Feature Identifiers & Effects Log Page:May Support 00:14:45.301 NVMe-MI Commands & Effects Log Page: May Support 00:14:45.301 Data Area 4 for Telemetry Log: Not Supported 00:14:45.301 Error Log Page Entries Supported: 128 00:14:45.301 Keep Alive: Supported 00:14:45.301 Keep Alive Granularity: 10000 ms 00:14:45.301 00:14:45.301 NVM Command Set Attributes 00:14:45.301 ========================== 00:14:45.301 Submission Queue Entry Size 00:14:45.301 Max: 64 00:14:45.301 Min: 64 00:14:45.301 Completion Queue Entry Size 00:14:45.301 Max: 16 00:14:45.301 Min: 16 00:14:45.301 Number of Namespaces: 32 00:14:45.301 Compare Command: Supported 00:14:45.301 Write Uncorrectable Command: Not Supported 00:14:45.301 Dataset Management Command: Supported 00:14:45.301 Write Zeroes Command: Supported 00:14:45.301 Set Features Save Field: Not Supported 00:14:45.301 Reservations: Supported 00:14:45.301 Timestamp: Not Supported 00:14:45.301 Copy: Supported 00:14:45.301 Volatile Write Cache: Present 00:14:45.301 Atomic Write Unit (Normal): 1 00:14:45.301 Atomic Write Unit (PFail): 1 00:14:45.301 Atomic Compare & Write Unit: 1 00:14:45.301 Fused Compare & Write: Supported 00:14:45.301 Scatter-Gather List 00:14:45.301 SGL Command Set: Supported 00:14:45.301 SGL Keyed: Supported 00:14:45.301 SGL Bit Bucket Descriptor: Not Supported 00:14:45.301 SGL Metadata Pointer: Not Supported 00:14:45.301 Oversized SGL: Not Supported 00:14:45.301 SGL Metadata Address: Not Supported 00:14:45.301 SGL Offset: Supported 00:14:45.301 Transport SGL Data Block: Not Supported 00:14:45.301 Replay Protected Memory Block: Not Supported 00:14:45.301 00:14:45.301 Firmware Slot Information 00:14:45.301 ========================= 00:14:45.301 Active slot: 1 00:14:45.301 Slot 1 Firmware Revision: 25.01 00:14:45.301 00:14:45.301 00:14:45.301 Commands Supported and Effects 00:14:45.301 ============================== 00:14:45.301 Admin Commands 00:14:45.301 -------------- 00:14:45.301 Get Log Page (02h): Supported 00:14:45.301 Identify (06h): Supported 00:14:45.301 Abort (08h): Supported 00:14:45.301 Set Features (09h): Supported 00:14:45.301 Get Features (0Ah): Supported 00:14:45.301 Asynchronous Event Request (0Ch): Supported 00:14:45.301 Keep Alive (18h): Supported 00:14:45.301 I/O Commands 00:14:45.301 ------------ 00:14:45.301 Flush (00h): Supported LBA-Change 00:14:45.301 Write (01h): Supported LBA-Change 00:14:45.301 Read (02h): Supported 00:14:45.301 Compare (05h): Supported 00:14:45.301 Write Zeroes (08h): Supported LBA-Change 00:14:45.301 Dataset Management (09h): Supported LBA-Change 00:14:45.301 Copy (19h): Supported LBA-Change 00:14:45.301 00:14:45.301 Error Log 00:14:45.301 ========= 00:14:45.301 00:14:45.301 Arbitration 00:14:45.301 =========== 00:14:45.301 Arbitration Burst: 1 00:14:45.301 00:14:45.301 Power Management 00:14:45.301 ================ 00:14:45.301 Number of Power States: 1 00:14:45.301 Current Power State: Power State #0 00:14:45.301 Power State #0: 00:14:45.301 Max Power: 0.00 W 00:14:45.301 Non-Operational State: Operational 00:14:45.301 Entry Latency: Not Reported 00:14:45.301 Exit Latency: Not Reported 00:14:45.301 Relative Read Throughput: 0 00:14:45.301 
Relative Read Latency: 0 00:14:45.301 Relative Write Throughput: 0 00:14:45.301 Relative Write Latency: 0 00:14:45.301 Idle Power: Not Reported 00:14:45.301 Active Power: Not Reported 00:14:45.301 Non-Operational Permissive Mode: Not Supported 00:14:45.301 00:14:45.301 Health Information 00:14:45.301 ================== 00:14:45.301 Critical Warnings: 00:14:45.301 Available Spare Space: OK 00:14:45.301 Temperature: OK 00:14:45.301 Device Reliability: OK 00:14:45.301 Read Only: No 00:14:45.301 Volatile Memory Backup: OK 00:14:45.301 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:45.301 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:45.301 Available Spare: 0% 00:14:45.301 Available Spare Threshold: 0% 00:14:45.301 Life Percentage Used:[2024-11-20 17:00:03.794714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.301 [2024-11-20 17:00:03.794721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.794729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.794752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b31c0, cid 7, qid 0 00:14:45.302 [2024-11-20 17:00:03.794797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.794804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.794808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.794812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b31c0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.798919] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:45.302 [2024-11-20 17:00:03.798950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2740) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.798959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.302 [2024-11-20 17:00:03.798965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b28c0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.798970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.302 [2024-11-20 17:00:03.798976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2a40) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.798980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.302 [2024-11-20 17:00:03.798985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.798990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.302 [2024-11-20 17:00:03.799002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799019] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799264] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:45.302 [2024-11-20 17:00:03.799269] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:45.302 [2024-11-20 17:00:03.799280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799393] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.799900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.799904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.799919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.799928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.799936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.799954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.302 [2024-11-20 17:00:03.799999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.302 [2024-11-20 17:00:03.800005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.302 [2024-11-20 17:00:03.800009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.800013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.302 [2024-11-20 17:00:03.800024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.800029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.302 [2024-11-20 17:00:03.800032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.302 [2024-11-20 17:00:03.800040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.302 [2024-11-20 17:00:03.800057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 
17:00:03.800103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 
[2024-11-20 17:00:03.800475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.800923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.800930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.800934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.800949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.800958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.800965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.800982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.801025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.801032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.801035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.801050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.801066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.801082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.801126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.801133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.801136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.801151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801156] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.801167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.801184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.801225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.801236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.801241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.801257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.801273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.303 [2024-11-20 17:00:03.801291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.303 [2024-11-20 17:00:03.801337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.303 [2024-11-20 17:00:03.801348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.303 [2024-11-20 17:00:03.801353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.303 [2024-11-20 17:00:03.801368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.303 [2024-11-20 17:00:03.801377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.303 [2024-11-20 17:00:03.801384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.801443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.801449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.801453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.801468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 
[2024-11-20 17:00:03.801484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.801550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.801565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.801570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.801585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.801602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.801669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.801680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.801684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.801700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.801716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.801775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.801782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.801785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.801800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.801816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801862] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.801912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.801919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.801923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.801938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.801948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.801955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.801973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.802022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.802058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.802075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.802130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.802165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.802182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 
[2024-11-20 17:00:03.802268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.802303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.802320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.802372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.802407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.802424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.802473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.304 [2024-11-20 17:00:03.802508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.304 [2024-11-20 17:00:03.802524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.304 [2024-11-20 17:00:03.802575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.304 [2024-11-20 17:00:03.802595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.304 [2024-11-20 17:00:03.802600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:45.304 [2024-11-20 17:00:03.802604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.304 [2024-11-20 17:00:03.802616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.304 [2024-11-20 17:00:03.802621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.802625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.305 [2024-11-20 17:00:03.802633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.305 [2024-11-20 17:00:03.802652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.305 [2024-11-20 17:00:03.802697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.305 [2024-11-20 17:00:03.802704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.305 [2024-11-20 17:00:03.802708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.802712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.305 [2024-11-20 17:00:03.802723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.802728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.802732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.305 [2024-11-20 17:00:03.802739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.305 [2024-11-20 17:00:03.802756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.305 [2024-11-20 17:00:03.802804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.305 [2024-11-20 17:00:03.802811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.305 [2024-11-20 17:00:03.802815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.802819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.305 [2024-11-20 17:00:03.805878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.805894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.805899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x194e750) 00:14:45.305 [2024-11-20 17:00:03.805909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.305 [2024-11-20 17:00:03.805935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b2bc0, cid 3, qid 0 00:14:45.305 [2024-11-20 17:00:03.806002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.305 [2024-11-20 17:00:03.806010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.305 [2024-11-20 17:00:03.806014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.305 [2024-11-20 17:00:03.806018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19b2bc0) on tqpair=0x194e750 00:14:45.305 [2024-11-20 17:00:03.806027] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:14:45.305 0% 00:14:45.305 Data Units Read: 0 00:14:45.305 Data Units Written: 0 00:14:45.305 Host Read Commands: 0 00:14:45.305 Host Write Commands: 0 00:14:45.305 Controller Busy Time: 0 minutes 00:14:45.305 Power Cycles: 0 00:14:45.305 Power On Hours: 0 hours 00:14:45.305 Unsafe Shutdowns: 0 00:14:45.305 Unrecoverable Media Errors: 0 00:14:45.305 Lifetime Error Log Entries: 0 00:14:45.305 Warning Temperature Time: 0 minutes 00:14:45.305 Critical Temperature Time: 0 minutes 00:14:45.305 00:14:45.305 Number of Queues 00:14:45.305 ================ 00:14:45.305 Number of I/O Submission Queues: 127 00:14:45.305 Number of I/O Completion Queues: 127 00:14:45.305 00:14:45.305 Active Namespaces 00:14:45.305 ================= 00:14:45.305 Namespace ID:1 00:14:45.305 Error Recovery Timeout: Unlimited 00:14:45.305 Command Set Identifier: NVM (00h) 00:14:45.305 Deallocate: Supported 00:14:45.305 Deallocated/Unwritten Error: Not Supported 00:14:45.305 Deallocated Read Value: Unknown 00:14:45.305 Deallocate in Write Zeroes: Not Supported 00:14:45.305 Deallocated Guard Field: 0xFFFF 00:14:45.305 Flush: Supported 00:14:45.305 Reservation: Supported 00:14:45.305 Namespace Sharing Capabilities: Multiple Controllers 00:14:45.305 Size (in LBAs): 131072 (0GiB) 00:14:45.305 Capacity (in LBAs): 131072 (0GiB) 00:14:45.305 Utilization (in LBAs): 131072 (0GiB) 00:14:45.305 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:45.305 EUI64: ABCDEF0123456789 00:14:45.305 UUID: db8ab385-f510-4096-8cd9-ae4de90c5c05 00:14:45.305 Thin Provisioning: Not Supported 00:14:45.305 Per-NS Atomic Units: Yes 00:14:45.305 Atomic Boundary Size (Normal): 0 00:14:45.305 Atomic Boundary Size (PFail): 0 00:14:45.305 Atomic Boundary Offset: 0 00:14:45.305 Maximum Single Source Range Length: 65535 00:14:45.305 Maximum Copy Length: 65535 00:14:45.305 Maximum Source Range Count: 1 00:14:45.305 NGUID/EUI64 Never Reused: No 00:14:45.305 Namespace Write Protected: No 00:14:45.305 Number of LBA Formats: 1 00:14:45.305 Current LBA Format: LBA Format #00 00:14:45.305 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:45.305 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:45.563 rmmod nvme_tcp 00:14:45.563 rmmod nvme_fabrics 00:14:45.563 rmmod nvme_keyring 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74188 ']' 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74188 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74188 ']' 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74188 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74188 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.563 killing process with pid 74188 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74188' 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74188 00:14:45.563 17:00:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74188 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:14:45.822 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:46.081 00:14:46.081 real 0m2.399s 00:14:46.081 user 0m4.814s 00:14:46.081 sys 0m0.769s 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:46.081 ************************************ 00:14:46.081 END TEST nvmf_identify 00:14:46.081 ************************************ 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.081 ************************************ 00:14:46.081 START TEST nvmf_perf 00:14:46.081 ************************************ 00:14:46.081 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:46.340 * Looking for test storage... 
00:14:46.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:46.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.341 --rc genhtml_branch_coverage=1 00:14:46.341 --rc genhtml_function_coverage=1 00:14:46.341 --rc genhtml_legend=1 00:14:46.341 --rc geninfo_all_blocks=1 00:14:46.341 --rc geninfo_unexecuted_blocks=1 00:14:46.341 00:14:46.341 ' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:46.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.341 --rc genhtml_branch_coverage=1 00:14:46.341 --rc genhtml_function_coverage=1 00:14:46.341 --rc genhtml_legend=1 00:14:46.341 --rc geninfo_all_blocks=1 00:14:46.341 --rc geninfo_unexecuted_blocks=1 00:14:46.341 00:14:46.341 ' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:46.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.341 --rc genhtml_branch_coverage=1 00:14:46.341 --rc genhtml_function_coverage=1 00:14:46.341 --rc genhtml_legend=1 00:14:46.341 --rc geninfo_all_blocks=1 00:14:46.341 --rc geninfo_unexecuted_blocks=1 00:14:46.341 00:14:46.341 ' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:46.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.341 --rc genhtml_branch_coverage=1 00:14:46.341 --rc genhtml_function_coverage=1 00:14:46.341 --rc genhtml_legend=1 00:14:46.341 --rc geninfo_all_blocks=1 00:14:46.341 --rc geninfo_unexecuted_blocks=1 00:14:46.341 00:14:46.341 ' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.341 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:46.342 Cannot find device "nvmf_init_br" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:46.342 Cannot find device "nvmf_init_br2" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:46.342 Cannot find device "nvmf_tgt_br" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.342 Cannot find device "nvmf_tgt_br2" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:46.342 Cannot find device "nvmf_init_br" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:46.342 Cannot find device "nvmf_init_br2" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:46.342 Cannot find device "nvmf_tgt_br" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:46.342 Cannot find device "nvmf_tgt_br2" 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:46.342 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:46.601 Cannot find device "nvmf_br" 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:46.601 Cannot find device "nvmf_init_if" 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:46.601 Cannot find device "nvmf_init_if2" 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.601 17:00:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:46.601 17:00:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:46.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:46.601 00:14:46.601 --- 10.0.0.3 ping statistics --- 00:14:46.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.601 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:46.601 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:46.601 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:46.601 00:14:46.601 --- 10.0.0.4 ping statistics --- 00:14:46.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.601 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:46.601 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:46.859 00:14:46.860 --- 10.0.0.1 ping statistics --- 00:14:46.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.860 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:46.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:14:46.860 00:14:46.860 --- 10.0.0.2 ping statistics --- 00:14:46.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.860 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74450 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74450 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74450 ']' 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
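For reference, the veth/bridge topology that the nvmf_veth_init steps above assemble can be reproduced standalone with roughly the commands below. This is a condensed sketch, not part of the harness output: it keeps only the first initiator/target pair and the port-4420 firewall rule, reusing the interface, namespace, and 10.0.0.0/24 names that appear in the log, and it assumes root privileges with iproute2 and iptables available.

    ip netns add nvmf_tgt_ns_spdk                               # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair (host side)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                              # bridge ties both host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.3                                           # same reachability check the log performs

The second initiator/target pair (10.0.0.2 and 10.0.0.4) and the FORWARD rule on nvmf_br follow the same pattern in the log above.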
00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.860 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:46.860 [2024-11-20 17:00:05.232398] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:14:46.860 [2024-11-20 17:00:05.232508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.860 [2024-11-20 17:00:05.379479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.118 [2024-11-20 17:00:05.433099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.118 [2024-11-20 17:00:05.433338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.118 [2024-11-20 17:00:05.433428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.118 [2024-11-20 17:00:05.433510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.118 [2024-11-20 17:00:05.433574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.118 [2024-11-20 17:00:05.434819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.118 [2024-11-20 17:00:05.436866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.118 [2024-11-20 17:00:05.437007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.118 [2024-11-20 17:00:05.437054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.118 [2024-11-20 17:00:05.492656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:47.118 17:00:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:47.686 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:47.686 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:47.945 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:47.945 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:48.203 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:48.203 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:48.203 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:48.203 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:48.203 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:48.462 [2024-11-20 17:00:06.941886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.462 17:00:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.721 17:00:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:48.721 17:00:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.287 17:00:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:49.287 17:00:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:49.287 17:00:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:49.546 [2024-11-20 17:00:07.999726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.546 17:00:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:49.804 17:00:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:49.804 17:00:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:49.804 17:00:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:49.804 17:00:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:51.182 Initializing NVMe Controllers 00:14:51.182 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:51.182 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:51.182 Initialization complete. Launching workers. 00:14:51.182 ======================================================== 00:14:51.182 Latency(us) 00:14:51.182 Device Information : IOPS MiB/s Average min max 00:14:51.182 PCIE (0000:00:10.0) NSID 1 from core 0: 23647.93 92.37 1353.23 402.97 6492.38 00:14:51.182 ======================================================== 00:14:51.182 Total : 23647.93 92.37 1353.23 402.97 6492.38 00:14:51.182 00:14:51.182 17:00:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:52.558 Initializing NVMe Controllers 00:14:52.558 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.558 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.558 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:52.558 Initialization complete. Launching workers. 
00:14:52.558 ======================================================== 00:14:52.558 Latency(us) 00:14:52.558 Device Information : IOPS MiB/s Average min max 00:14:52.558 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3803.98 14.86 262.52 99.03 7175.00 00:14:52.558 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8038.84 6960.77 11988.26 00:14:52.558 ======================================================== 00:14:52.558 Total : 3928.98 15.35 509.92 99.03 11988.26 00:14:52.558 00:14:52.558 17:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:53.935 Initializing NVMe Controllers 00:14:53.935 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.935 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:53.935 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:53.935 Initialization complete. Launching workers. 00:14:53.935 ======================================================== 00:14:53.935 Latency(us) 00:14:53.935 Device Information : IOPS MiB/s Average min max 00:14:53.935 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8566.00 33.46 3736.25 598.80 8206.08 00:14:53.935 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4020.59 15.71 7997.42 6688.95 9330.63 00:14:53.935 ======================================================== 00:14:53.935 Total : 12586.59 49.17 5097.42 598.80 9330.63 00:14:53.935 00:14:53.935 17:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:53.935 17:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:56.465 Initializing NVMe Controllers 00:14:56.465 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.465 Controller IO queue size 128, less than required. 00:14:56.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.465 Controller IO queue size 128, less than required. 00:14:56.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.465 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:56.465 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:56.465 Initialization complete. Launching workers. 
00:14:56.465 ======================================================== 00:14:56.465 Latency(us) 00:14:56.465 Device Information : IOPS MiB/s Average min max 00:14:56.465 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1740.24 435.06 74554.33 42865.82 111554.17 00:14:56.465 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.90 165.98 196368.18 72092.42 313635.21 00:14:56.465 ======================================================== 00:14:56.465 Total : 2404.14 601.03 108193.09 42865.82 313635.21 00:14:56.465 00:14:56.466 17:00:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:56.725 Initializing NVMe Controllers 00:14:56.725 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.725 Controller IO queue size 128, less than required. 00:14:56.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.725 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:56.725 Controller IO queue size 128, less than required. 00:14:56.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.725 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:56.725 WARNING: Some requested NVMe devices were skipped 00:14:56.725 No valid NVMe controllers or AIO or URING devices found 00:14:56.725 17:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:59.258 Initializing NVMe Controllers 00:14:59.258 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:59.258 Controller IO queue size 128, less than required. 00:14:59.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:59.258 Controller IO queue size 128, less than required. 00:14:59.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:59.258 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:59.258 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:59.258 Initialization complete. Launching workers. 
00:14:59.258 00:14:59.258 ==================== 00:14:59.258 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:59.258 TCP transport: 00:14:59.258 polls: 9584 00:14:59.258 idle_polls: 6418 00:14:59.258 sock_completions: 3166 00:14:59.258 nvme_completions: 6001 00:14:59.258 submitted_requests: 9002 00:14:59.258 queued_requests: 1 00:14:59.258 00:14:59.258 ==================== 00:14:59.258 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:59.258 TCP transport: 00:14:59.258 polls: 9724 00:14:59.258 idle_polls: 5248 00:14:59.258 sock_completions: 4476 00:14:59.258 nvme_completions: 6919 00:14:59.258 submitted_requests: 10348 00:14:59.258 queued_requests: 1 00:14:59.258 ======================================================== 00:14:59.258 Latency(us) 00:14:59.258 Device Information : IOPS MiB/s Average min max 00:14:59.258 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1499.87 374.97 86963.54 46060.17 140139.15 00:14:59.258 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1729.35 432.34 74440.97 38736.19 119004.58 00:14:59.258 ======================================================== 00:14:59.258 Total : 3229.22 807.31 80257.31 38736.19 140139.15 00:14:59.258 00:14:59.258 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:59.258 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.517 17:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.517 rmmod nvme_tcp 00:14:59.517 rmmod nvme_fabrics 00:14:59.517 rmmod nvme_keyring 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74450 ']' 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74450 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74450 ']' 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74450 00:14:59.517 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74450 00:14:59.776 killing process with pid 74450 00:14:59.776 17:00:18 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74450' 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74450 00:14:59.776 17:00:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74450 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:00.763 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:00.764 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:01.026 00:15:01.026 real 0m14.822s 00:15:01.026 user 0m53.141s 00:15:01.026 sys 0m4.121s 00:15:01.026 17:00:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.026 ************************************ 00:15:01.026 END TEST nvmf_perf 00:15:01.026 ************************************ 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.026 ************************************ 00:15:01.026 START TEST nvmf_fio_host 00:15:01.026 ************************************ 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:01.026 * Looking for test storage... 00:15:01.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.026 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.286 --rc genhtml_branch_coverage=1 00:15:01.286 --rc genhtml_function_coverage=1 00:15:01.286 --rc genhtml_legend=1 00:15:01.286 --rc geninfo_all_blocks=1 00:15:01.286 --rc geninfo_unexecuted_blocks=1 00:15:01.286 00:15:01.286 ' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.286 --rc genhtml_branch_coverage=1 00:15:01.286 --rc genhtml_function_coverage=1 00:15:01.286 --rc genhtml_legend=1 00:15:01.286 --rc geninfo_all_blocks=1 00:15:01.286 --rc geninfo_unexecuted_blocks=1 00:15:01.286 00:15:01.286 ' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.286 --rc genhtml_branch_coverage=1 00:15:01.286 --rc genhtml_function_coverage=1 00:15:01.286 --rc genhtml_legend=1 00:15:01.286 --rc geninfo_all_blocks=1 00:15:01.286 --rc geninfo_unexecuted_blocks=1 00:15:01.286 00:15:01.286 ' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.286 --rc genhtml_branch_coverage=1 00:15:01.286 --rc genhtml_function_coverage=1 00:15:01.286 --rc genhtml_legend=1 00:15:01.286 --rc geninfo_all_blocks=1 00:15:01.286 --rc geninfo_unexecuted_blocks=1 00:15:01.286 00:15:01.286 ' 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:01.286 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.287 17:00:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.287 17:00:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:01.287 Cannot find device "nvmf_init_br" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:01.287 Cannot find device "nvmf_init_br2" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:01.287 Cannot find device "nvmf_tgt_br" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:01.287 Cannot find device "nvmf_tgt_br2" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:01.287 Cannot find device "nvmf_init_br" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:01.287 Cannot find device "nvmf_init_br2" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:01.287 Cannot find device "nvmf_tgt_br" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:01.287 Cannot find device "nvmf_tgt_br2" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:01.287 Cannot find device "nvmf_br" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:01.287 Cannot find device "nvmf_init_if" 00:15:01.287 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:01.288 Cannot find device "nvmf_init_if2" 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:01.288 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:01.547 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:01.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:01.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:01.548 00:15:01.548 --- 10.0.0.3 ping statistics --- 00:15:01.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.548 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:01.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:01.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:01.548 00:15:01.548 --- 10.0.0.4 ping statistics --- 00:15:01.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.548 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:01.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:01.548 00:15:01.548 --- 10.0.0.1 ping statistics --- 00:15:01.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.548 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:01.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:01.548 00:15:01.548 --- 10.0.0.2 ping statistics --- 00:15:01.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.548 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74907 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74907 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74907 ']' 00:15:01.548 17:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.548 17:00:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.548 17:00:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.548 17:00:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.548 17:00:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.548 [2024-11-20 17:00:20.057812] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:15:01.548 [2024-11-20 17:00:20.057910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.807 [2024-11-20 17:00:20.212508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.807 [2024-11-20 17:00:20.273988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.807 [2024-11-20 17:00:20.274254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.807 [2024-11-20 17:00:20.274413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.807 [2024-11-20 17:00:20.274484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.807 [2024-11-20 17:00:20.274637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.807 [2024-11-20 17:00:20.275910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.807 [2024-11-20 17:00:20.276020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.807 [2024-11-20 17:00:20.276099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.807 [2024-11-20 17:00:20.276779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.807 [2024-11-20 17:00:20.333981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.744 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.744 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:02.744 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.003 [2024-11-20 17:00:21.381332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.003 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:03.003 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.003 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:03.003 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.262 Malloc1 00:15:03.262 17:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.520 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.088 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:04.088 [2024-11-20 17:00:22.621379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:04.348 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:04.608 17:00:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:04.608 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:04.608 fio-3.35 00:15:04.608 Starting 1 thread 00:15:07.140 00:15:07.140 test: (groupid=0, jobs=1): err= 0: pid=74990: Wed Nov 20 17:00:25 2024 00:15:07.140 read: IOPS=8753, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2007msec) 00:15:07.140 slat (nsec): min=1928, max=315183, avg=2578.00, stdev=3240.22 00:15:07.140 clat (usec): min=2189, max=13748, avg=7613.89, stdev=547.98 00:15:07.140 lat (usec): min=2228, max=13751, avg=7616.46, stdev=547.69 00:15:07.140 clat percentiles (usec): 00:15:07.140 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7242], 00:15:07.140 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:15:07.140 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:15:07.140 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[12780], 00:15:07.140 | 99.99th=[13698] 00:15:07.140 bw ( KiB/s): min=33808, max=35824, per=100.00%, avg=35022.00, stdev=860.60, samples=4 00:15:07.140 iops : min= 8452, max= 8956, avg=8755.50, stdev=215.15, samples=4 00:15:07.140 write: IOPS=8760, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2007msec); 0 zone resets 00:15:07.140 slat (nsec): min=1984, max=207871, avg=2640.06, stdev=2199.12 00:15:07.140 clat (usec): min=2070, max=13531, avg=6946.04, stdev=505.15 00:15:07.140 lat (usec): min=2082, max=13533, avg=6948.68, stdev=505.05 00:15:07.140 
clat percentiles (usec): 00:15:07.140 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:07.140 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:15:07.140 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7635], 00:15:07.140 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[12649], 00:15:07.140 | 99.99th=[13435] 00:15:07.140 bw ( KiB/s): min=34744, max=35520, per=99.96%, avg=35026.00, stdev=340.19, samples=4 00:15:07.141 iops : min= 8686, max= 8880, avg=8756.50, stdev=85.05, samples=4 00:15:07.141 lat (msec) : 4=0.13%, 10=99.64%, 20=0.22% 00:15:07.141 cpu : usr=70.49%, sys=22.68%, ctx=12, majf=0, minf=7 00:15:07.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:07.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.141 issued rwts: total=17569,17582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.141 00:15:07.141 Run status group 0 (all jobs): 00:15:07.141 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2007-2007msec 00:15:07.141 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2007-2007msec 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:07.141 17:00:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:07.141 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:07.141 fio-3.35 00:15:07.141 Starting 1 thread 00:15:09.691 00:15:09.691 test: (groupid=0, jobs=1): err= 0: pid=75033: Wed Nov 20 17:00:27 2024 00:15:09.691 read: IOPS=8320, BW=130MiB/s (136MB/s)(261MiB/2009msec) 00:15:09.691 slat (usec): min=2, max=659, avg= 3.67, stdev= 5.48 00:15:09.691 clat (usec): min=1790, max=17710, avg=8640.28, stdev=2651.27 00:15:09.691 lat (usec): min=1793, max=17713, avg=8643.96, stdev=2651.30 00:15:09.691 clat percentiles (usec): 00:15:09.691 | 1.00th=[ 4080], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6194], 00:15:09.691 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:09.691 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[12256], 95.00th=[13566], 00:15:09.691 | 99.00th=[15664], 99.50th=[16319], 99.90th=[17171], 99.95th=[17433], 00:15:09.691 | 99.99th=[17695] 00:15:09.691 bw ( KiB/s): min=61344, max=73152, per=51.01%, avg=67912.00, stdev=5677.78, samples=4 00:15:09.691 iops : min= 3834, max= 4572, avg=4244.50, stdev=354.86, samples=4 00:15:09.691 write: IOPS=4818, BW=75.3MiB/s (78.9MB/s)(139MiB/1841msec); 0 zone resets 00:15:09.691 slat (usec): min=32, max=264, avg=37.72, stdev= 7.43 00:15:09.691 clat (usec): min=6120, max=21534, avg=11982.89, stdev=2390.88 00:15:09.691 lat (usec): min=6152, max=21569, avg=12020.61, stdev=2391.26 00:15:09.691 clat percentiles (usec): 00:15:09.691 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:15:09.691 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:15:09.691 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15401], 95.00th=[16450], 00:15:09.691 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21103], 99.95th=[21365], 00:15:09.691 | 99.99th=[21627] 00:15:09.691 bw ( KiB/s): min=64608, max=75776, per=91.67%, avg=70664.00, stdev=5936.35, samples=4 00:15:09.691 iops : min= 4038, max= 4736, avg=4416.50, stdev=371.02, samples=4 00:15:09.691 lat (msec) : 2=0.02%, 4=0.53%, 10=53.11%, 20=46.16%, 50=0.18% 00:15:09.691 cpu : usr=81.88%, sys=14.19%, ctx=6, majf=0, minf=12 00:15:09.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:09.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.691 issued rwts: total=16716,8870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.691 00:15:09.691 Run status group 0 (all jobs): 
00:15:09.691 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (274MB), run=2009-2009msec 00:15:09.691 WRITE: bw=75.3MiB/s (78.9MB/s), 75.3MiB/s-75.3MiB/s (78.9MB/s-78.9MB/s), io=139MiB (145MB), run=1841-1841msec 00:15:09.691 17:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:09.950 rmmod nvme_tcp 00:15:09.950 rmmod nvme_fabrics 00:15:09.950 rmmod nvme_keyring 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74907 ']' 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74907 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74907 ']' 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74907 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.950 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74907 00:15:09.950 killing process with pid 74907 00:15:09.951 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.951 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.951 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74907' 00:15:09.951 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74907 00:15:09.951 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74907 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:10.210 17:00:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:10.210 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:10.469 00:15:10.469 real 0m9.458s 00:15:10.469 user 0m38.023s 00:15:10.469 sys 0m2.475s 00:15:10.469 ************************************ 00:15:10.469 END TEST nvmf_fio_host 00:15:10.469 ************************************ 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.469 ************************************ 00:15:10.469 START TEST nvmf_failover 
00:15:10.469 ************************************ 00:15:10.469 17:00:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:10.729 * Looking for test storage... 00:15:10.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:10.729 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:10.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.730 --rc genhtml_branch_coverage=1 00:15:10.730 --rc genhtml_function_coverage=1 00:15:10.730 --rc genhtml_legend=1 00:15:10.730 --rc geninfo_all_blocks=1 00:15:10.730 --rc geninfo_unexecuted_blocks=1 00:15:10.730 00:15:10.730 ' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:10.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.730 --rc genhtml_branch_coverage=1 00:15:10.730 --rc genhtml_function_coverage=1 00:15:10.730 --rc genhtml_legend=1 00:15:10.730 --rc geninfo_all_blocks=1 00:15:10.730 --rc geninfo_unexecuted_blocks=1 00:15:10.730 00:15:10.730 ' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:10.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.730 --rc genhtml_branch_coverage=1 00:15:10.730 --rc genhtml_function_coverage=1 00:15:10.730 --rc genhtml_legend=1 00:15:10.730 --rc geninfo_all_blocks=1 00:15:10.730 --rc geninfo_unexecuted_blocks=1 00:15:10.730 00:15:10.730 ' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:10.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.730 --rc genhtml_branch_coverage=1 00:15:10.730 --rc genhtml_function_coverage=1 00:15:10.730 --rc genhtml_legend=1 00:15:10.730 --rc geninfo_all_blocks=1 00:15:10.730 --rc geninfo_unexecuted_blocks=1 00:15:10.730 00:15:10.730 ' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.730 
17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:10.730 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:10.730 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:10.731 Cannot find device "nvmf_init_br" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:10.731 Cannot find device "nvmf_init_br2" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:10.731 Cannot find device "nvmf_tgt_br" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.731 Cannot find device "nvmf_tgt_br2" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:10.731 Cannot find device "nvmf_init_br" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:10.731 Cannot find device "nvmf_init_br2" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:10.731 Cannot find device "nvmf_tgt_br" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:10.731 Cannot find device "nvmf_tgt_br2" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:10.731 Cannot find device "nvmf_br" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:10.731 Cannot find device "nvmf_init_if" 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:10.731 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:10.989 Cannot find device "nvmf_init_if2" 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.989 
17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:10.989 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:10.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:10.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:10.990 00:15:10.990 --- 10.0.0.3 ping statistics --- 00:15:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.990 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:10.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:10.990 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:10.990 00:15:10.990 --- 10.0.0.4 ping statistics --- 00:15:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.990 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:10.990 00:15:10.990 --- 10.0.0.1 ping statistics --- 00:15:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.990 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:10.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:10.990 00:15:10.990 --- 10.0.0.2 ping statistics --- 00:15:10.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.990 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.990 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75306 00:15:11.248 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75306 00:15:11.249 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75306 ']' 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.249 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.249 [2024-11-20 17:00:29.591698] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:15:11.249 [2024-11-20 17:00:29.591994] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.249 [2024-11-20 17:00:29.744632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.507 [2024-11-20 17:00:29.804306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.507 [2024-11-20 17:00:29.804568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.507 [2024-11-20 17:00:29.804747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.507 [2024-11-20 17:00:29.804804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.507 [2024-11-20 17:00:29.804959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
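The environment for this run was prepared by nvmf_veth_init and nvmfappstart as traced above. A condensed, hand-written sketch of the same setup follows; interface names, addresses and the nvmf_tgt path are taken from the trace, while error handling, cleanup and the iptables comment tags are omitted (needs root, illustrative only):

  #!/usr/bin/env bash
  # Condensed sketch of the nvmf_veth_init topology traced above (needs root).
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # Two initiator-side and two target-side veth pairs; the *_br peers are later
  # enslaved to one bridge, and the nvmf_tgt_if* ends move into the target namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  # Initiator addresses stay on the host, target addresses live inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # One bridge ties the four peer ends together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Accept NVMe/TCP (port 4420) from the initiator interfaces, allow bridged forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity check in both directions, then start the target inside the namespace.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec "$NS" ping -c 1 10.0.0.1 && ip netns exec "$NS" ping -c 1 10.0.0.2
  ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
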
00:15:11.507 [2024-11-20 17:00:29.806242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.507 [2024-11-20 17:00:29.806396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.507 [2024-11-20 17:00:29.806402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.507 [2024-11-20 17:00:29.866539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.507 17:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.766 [2024-11-20 17:00:30.259998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.766 17:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:12.334 Malloc0 00:15:12.334 17:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.592 17:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.850 17:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.110 [2024-11-20 17:00:31.412365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.110 17:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:13.368 [2024-11-20 17:00:31.672594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:13.368 17:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:13.627 [2024-11-20 17:00:32.001006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:13.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
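At this point the target side is fully configured: a TCP transport, one malloc-backed namespace, and listeners on ports 4420-4422. A minimal sketch of that RPC sequence, using the same rpc.py path and arguments as the trace above (illustrative, not the test script itself):

  #!/usr/bin/env bash
  # Sketch of the target-side RPC sequence traced above: TCP transport, one
  # 64 MiB malloc namespace, and three listeners that the failover test will juggle.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192                  # flags copied from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001    # -a: allow any host
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0

  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
  done
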
00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75357 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75357 /var/tmp/bdevperf.sock 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75357 ']' 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.627 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:14.194 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.194 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:14.194 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:14.454 NVMe0n1 00:15:14.454 17:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:14.713 00:15:14.713 17:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75373 00:15:14.713 17:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.713 17:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:16.089 17:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:16.089 [2024-11-20 17:00:34.487231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6d30 is same with the state(6) to be set 00:15:16.089 [2024-11-20 17:00:34.487572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6d30 is same with the state(6) to be set 00:15:16.089 [2024-11-20 17:00:34.487593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6d30 is same with the state(6) to be set 00:15:16.089 17:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:19.377 17:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b 
NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:19.377 00:15:19.377 17:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:19.945 17:00:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:23.233 17:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:23.233 [2024-11-20 17:00:41.533862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.233 17:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:24.174 17:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:24.432 17:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75373 00:15:31.005 { 00:15:31.005 "results": [ 00:15:31.005 { 00:15:31.005 "job": "NVMe0n1", 00:15:31.005 "core_mask": "0x1", 00:15:31.005 "workload": "verify", 00:15:31.005 "status": "finished", 00:15:31.005 "verify_range": { 00:15:31.005 "start": 0, 00:15:31.005 "length": 16384 00:15:31.005 }, 00:15:31.005 "queue_depth": 128, 00:15:31.005 "io_size": 4096, 00:15:31.005 "runtime": 15.010372, 00:15:31.005 "iops": 8689.924540177952, 00:15:31.005 "mibps": 33.945017735070124, 00:15:31.005 "io_failed": 3349, 00:15:31.005 "io_timeout": 0, 00:15:31.005 "avg_latency_us": 14327.725257163978, 00:15:31.005 "min_latency_us": 629.2945454545454, 00:15:31.005 "max_latency_us": 16086.10909090909 00:15:31.005 } 00:15:31.005 ], 00:15:31.005 "core_count": 1 00:15:31.005 } 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75357 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75357 ']' 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75357 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75357 00:15:31.005 killing process with pid 75357 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75357' 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75357 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75357 00:15:31.005 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:31.005 [2024-11-20 17:00:32.078943] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
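The try.txt excerpt that follows is the bdevperf-side log of the same 15-second run. The listener shuffle that drives the failovers can be reduced to the sketch below; ports, timing and flags are taken from the failover.sh steps traced above, and bdevperf is assumed to be already running in -z mode on /var/tmp/bdevperf.sock:

  #!/usr/bin/env bash
  # Sketch of the listener shuffle performed while bdevperf runs 15 s of verify I/O.
  # bdevperf is assumed to be started separately with:
  #   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bp="$rpc -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1

  # Register two paths up front; -x failover keeps the extra path as a standby.
  $bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn" -x failover
  $bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$nqn" -x failover
  # I/O is kicked off separately: bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  sleep 1
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420   # force failover to 4421
  sleep 3
  $bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$nqn" -x failover
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421   # fail over again, to 4422
  sleep 3
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420      # bring 4420 back
  sleep 1
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422   # final switch back to 4420
  # When the 15 s run ends, bdevperf prints the JSON summary shown above
  # (~8690 IOPS with 3349 failed I/Os recorded across the path switches).
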
00:15:31.005 [2024-11-20 17:00:32.079051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75357 ] 00:15:31.005 [2024-11-20 17:00:32.230511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.005 [2024-11-20 17:00:32.295350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.005 [2024-11-20 17:00:32.355113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.005 Running I/O for 15 seconds... 00:15:31.005 6450.00 IOPS, 25.20 MiB/s [2024-11-20T17:00:49.541Z] [2024-11-20 17:00:34.487719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.005 [2024-11-20 17:00:34.487773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.005 [2024-11-20 17:00:34.487818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.005 [2024-11-20 17:00:34.487850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.005 [2024-11-20 17:00:34.487881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.487924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.487960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.487976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.487990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:31.005 [2024-11-20 17:00:34.488035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.005 [2024-11-20 17:00:34.488364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.005 [2024-11-20 17:00:34.488381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.488396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488412] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.488426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.488455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.488973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.488991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:31.006 [2024-11-20 17:00:34.489369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.006 [2024-11-20 17:00:34.489467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.006 [2024-11-20 17:00:34.489599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.006 [2024-11-20 17:00:34.489612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.489961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.489992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490006] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.007 [2024-11-20 17:00:34.490488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 
[2024-11-20 17:00:34.490666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.007 [2024-11-20 17:00:34.490889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.007 [2024-11-20 17:00:34.490909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.490926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.008 [2024-11-20 17:00:34.490941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.490956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.008 [2024-11-20 17:00:34.490970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.490993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.008 [2024-11-20 17:00:34.491008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.008 [2024-11-20 17:00:34.491231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab2c90 is same with the state(6) to be set 00:15:31.008 [2024-11-20 17:00:34.491262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491311] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64736 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64744 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64760 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64784 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 
17:00:34.491951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.491961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.491974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.491988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.491998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.492008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.492021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.492035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.492045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.492055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.492069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.008 [2024-11-20 17:00:34.492082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.008 [2024-11-20 17:00:34.492092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.008 [2024-11-20 17:00:34.492102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:15:31.008 [2024-11-20 17:00:34.492115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.009 [2024-11-20 17:00:34.492139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.009 [2024-11-20 17:00:34.492149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:15:31.009 [2024-11-20 17:00:34.492162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.009 [2024-11-20 17:00:34.492210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.009 [2024-11-20 17:00:34.492226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:15:31.009 [2024-11-20 17:00:34.492240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.009 [2024-11-20 17:00:34.492269] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.009 [2024-11-20 17:00:34.492279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:15:31.009 [2024-11-20 17:00:34.492292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.009 [2024-11-20 17:00:34.492326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.009 [2024-11-20 17:00:34.492336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:15:31.009 [2024-11-20 17:00:34.492350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492412] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:31.009 [2024-11-20 17:00:34.492472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.009 [2024-11-20 17:00:34.492495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.009 [2024-11-20 17:00:34.492525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.009 [2024-11-20 17:00:34.492554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.009 [2024-11-20 17:00:34.492582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:34.492596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:31.009 [2024-11-20 17:00:34.496500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:31.009 [2024-11-20 17:00:34.496554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa18710 (9): Bad file descriptor 00:15:31.009 [2024-11-20 17:00:34.526148] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:15:31.009 7417.00 IOPS, 28.97 MiB/s [2024-11-20T17:00:49.545Z] 8032.67 IOPS, 31.38 MiB/s [2024-11-20T17:00:49.545Z] 8302.50 IOPS, 32.43 MiB/s [2024-11-20T17:00:49.545Z] [2024-11-20 17:00:38.198449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.198795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.198980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.198993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.199029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.009 [2024-11-20 17:00:38.199075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.199102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.199162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.199194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.199223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.009 [2024-11-20 17:00:38.199239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.009 [2024-11-20 17:00:38.199253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.199579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 
[2024-11-20 17:00:38.199876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.199983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.199996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.010 [2024-11-20 17:00:38.200354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.200383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.200412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.200441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.200470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.010 [2024-11-20 17:00:38.200486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.010 [2024-11-20 17:00:38.200499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.200528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.200557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.200586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.200616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75248 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.200985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.200998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:31.011 [2024-11-20 17:00:38.201154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201463] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.011 [2024-11-20 17:00:38.201611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.011 [2024-11-20 17:00:38.201709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.011 [2024-11-20 17:00:38.201725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.201739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.201769] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.201798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.201848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.201879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.201908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.201939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.201968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.201984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.201999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.202028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.202058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.202088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.012 [2024-11-20 17:00:38.202118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:31.012 [2024-11-20 17:00:38.202413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.012 [2024-11-20 17:00:38.202574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab6af0 is same with the state(6) to be set 00:15:31.012 [2024-11-20 17:00:38.202631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.012 [2024-11-20 17:00:38.202642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.012 [2024-11-20 17:00:38.202659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75512 len:8 PRP1 0x0 PRP2 0x0 00:15:31.012 [2024-11-20 17:00:38.202679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202743] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:31.012 [2024-11-20 17:00:38.202804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.012 [2024-11-20 17:00:38.202857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:15:31.012 [2024-11-20 17:00:38.202889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.012 [2024-11-20 17:00:38.202916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.012 [2024-11-20 17:00:38.202943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:38.202957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:31.012 [2024-11-20 17:00:38.203008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa18710 (9): Bad file descriptor 00:15:31.012 [2024-11-20 17:00:38.206880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:31.012 [2024-11-20 17:00:38.230770] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:15:31.012 8318.00 IOPS, 32.49 MiB/s [2024-11-20T17:00:49.548Z] 8396.00 IOPS, 32.80 MiB/s [2024-11-20T17:00:49.548Z] 8453.71 IOPS, 33.02 MiB/s [2024-11-20T17:00:49.548Z] 8506.50 IOPS, 33.23 MiB/s [2024-11-20T17:00:49.548Z] 8582.22 IOPS, 33.52 MiB/s [2024-11-20T17:00:49.548Z] [2024-11-20 17:00:42.858050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.012 [2024-11-20 17:00:42.858122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:42.858160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.012 [2024-11-20 17:00:42.858174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.012 [2024-11-20 17:00:42.858187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.013 [2024-11-20 17:00:42.858200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.858214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.013 [2024-11-20 17:00:42.858227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.858269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa18710 is same with the state(6) to be set 00:15:31.013 [2024-11-20 17:00:42.858902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.858931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.858955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.858981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.858996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:31.013 [2024-11-20 17:00:42.859872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.013 [2024-11-20 17:00:42.859885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.859973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.859987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.860002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.860016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.860030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.013 [2024-11-20 17:00:42.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.013 [2024-11-20 17:00:42.860094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860535] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.014 [2024-11-20 17:00:42.860680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21080 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.860984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.861024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.861052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.861106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 [2024-11-20 17:00:42.861134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.014 [2024-11-20 17:00:42.861165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.014 
[2024-11-20 17:00:42.861179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.861907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.861936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.861965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.861980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:31.015 [2024-11-20 17:00:42.862272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.862300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.862329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.862363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.015 [2024-11-20 17:00:42.862378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.015 [2024-11-20 17:00:42.862392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 
[2024-11-20 17:00:42.862408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.016 [2024-11-20 17:00:42.862423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.016 [2024-11-20 17:00:42.862459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:31.016 [2024-11-20 17:00:42.862487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab6e70 is same with the state(6) to be set 00:15:31.016 [2024-11-20 17:00:42.862518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21264 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 
[2024-11-20 17:00:42.862704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21288 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21296 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21320 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.862958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.862970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.862980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.862990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21328 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.863002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.863016] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.863026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.863036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21336 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.863049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.863062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.863071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.863081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.863093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.863107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:31.016 [2024-11-20 17:00:42.863116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:31.016 [2024-11-20 17:00:42.863126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21352 len:8 PRP1 0x0 PRP2 0x0 00:15:31.016 [2024-11-20 17:00:42.863144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.016 [2024-11-20 17:00:42.863205] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:31.016 [2024-11-20 17:00:42.863226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:31.016 [2024-11-20 17:00:42.867054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:31.016 [2024-11-20 17:00:42.867107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa18710 (9): Bad file descriptor 00:15:31.016 [2024-11-20 17:00:42.894716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
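Notices like the "Resetting controller successful" line above are what this stage of the failover test keys on: the host/failover.sh trace a few lines below pipes the captured bdevperf output through grep -c and aborts if the total is not exactly 3, i.e. one successful reset per failover hop between 10.0.0.3:4420, 4421 and 4422. A minimal stand-alone sketch of that check, assuming the input is the try.txt file that the script cats later in this log (the real script may wire the paths differently):

    #!/usr/bin/env bash
    # Sketch of the reset-count check traced below (not the verbatim failover.sh code).
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # path taken from the later 'cat' in this log; assumed here
    count=$(grep -c 'Resetting controller successful' "$log")
    # The first bdevperf pass drives three failovers (4420 -> 4421 -> 4422 -> 4420),
    # so anything other than three successful controller resets fails the run.
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi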
00:15:31.016 8600.70 IOPS, 33.60 MiB/s [2024-11-20T17:00:49.552Z] 8657.36 IOPS, 33.82 MiB/s [2024-11-20T17:00:49.552Z] 8667.92 IOPS, 33.86 MiB/s [2024-11-20T17:00:49.552Z] 8675.00 IOPS, 33.89 MiB/s [2024-11-20T17:00:49.552Z] 8682.21 IOPS, 33.91 MiB/s [2024-11-20T17:00:49.552Z] 8689.00 IOPS, 33.94 MiB/s 00:15:31.016 Latency(us) 00:15:31.016 [2024-11-20T17:00:49.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.016 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.016 Verification LBA range: start 0x0 length 0x4000 00:15:31.016 NVMe0n1 : 15.01 8689.92 33.95 223.11 0.00 14327.73 629.29 16086.11 00:15:31.016 [2024-11-20T17:00:49.552Z] =================================================================================================================== 00:15:31.016 [2024-11-20T17:00:49.552Z] Total : 8689.92 33.95 223.11 0.00 14327.73 629.29 16086.11 00:15:31.016 Received shutdown signal, test time was about 15.000000 seconds 00:15:31.016 00:15:31.016 Latency(us) 00:15:31.016 [2024-11-20T17:00:49.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.016 [2024-11-20T17:00:49.552Z] =================================================================================================================== 00:15:31.016 [2024-11-20T17:00:49.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75553 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75553 /var/tmp/bdevperf.sock 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75553 ']' 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:31.016 17:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.016 17:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:31.016 17:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:31.017 [2024-11-20 17:00:49.246790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:31.017 17:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:31.017 [2024-11-20 17:00:49.502995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:31.017 17:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:31.584 NVMe0n1 00:15:31.584 17:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:31.843 00:15:31.843 17:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:32.102 00:15:32.102 17:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:32.102 17:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:32.360 17:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:32.619 17:00:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:35.969 17:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:35.970 17:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:35.970 17:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75622 00:15:35.970 17:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.970 17:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75622 00:15:37.349 { 00:15:37.349 "results": [ 00:15:37.349 { 00:15:37.349 "job": "NVMe0n1", 00:15:37.349 "core_mask": "0x1", 00:15:37.349 "workload": "verify", 00:15:37.349 "status": "finished", 00:15:37.349 "verify_range": { 00:15:37.349 "start": 0, 00:15:37.349 "length": 16384 00:15:37.349 }, 00:15:37.349 "queue_depth": 128, 
00:15:37.349 "io_size": 4096, 00:15:37.349 "runtime": 1.007251, 00:15:37.349 "iops": 6543.552699376819, 00:15:37.349 "mibps": 25.5607527319407, 00:15:37.349 "io_failed": 0, 00:15:37.349 "io_timeout": 0, 00:15:37.349 "avg_latency_us": 19446.178209955724, 00:15:37.349 "min_latency_us": 2234.181818181818, 00:15:37.349 "max_latency_us": 16443.578181818182 00:15:37.349 } 00:15:37.349 ], 00:15:37.349 "core_count": 1 00:15:37.349 } 00:15:37.349 17:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:37.349 [2024-11-20 17:00:48.646653] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:15:37.349 [2024-11-20 17:00:48.646752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75553 ] 00:15:37.349 [2024-11-20 17:00:48.796652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.349 [2024-11-20 17:00:48.865499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.349 [2024-11-20 17:00:48.925346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.349 [2024-11-20 17:00:51.068503] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:37.349 [2024-11-20 17:00:51.068743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.349 [2024-11-20 17:00:51.068776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.349 [2024-11-20 17:00:51.068804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.349 [2024-11-20 17:00:51.068822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.349 [2024-11-20 17:00:51.068840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.349 [2024-11-20 17:00:51.068871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.349 [2024-11-20 17:00:51.068894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.349 [2024-11-20 17:00:51.068911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.349 [2024-11-20 17:00:51.068930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:37.349 [2024-11-20 17:00:51.068991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:37.349 [2024-11-20 17:00:51.069046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5f710 (9): Bad file descriptor 00:15:37.349 [2024-11-20 17:00:51.078913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:37.349 Running I/O for 1 seconds... 
00:15:37.349 6463.00 IOPS, 25.25 MiB/s 00:15:37.349 Latency(us) 00:15:37.349 [2024-11-20T17:00:55.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.350 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:37.350 Verification LBA range: start 0x0 length 0x4000 00:15:37.350 NVMe0n1 : 1.01 6543.55 25.56 0.00 0.00 19446.18 2234.18 16443.58 00:15:37.350 [2024-11-20T17:00:55.886Z] =================================================================================================================== 00:15:37.350 [2024-11-20T17:00:55.886Z] Total : 6543.55 25.56 0.00 0.00 19446.18 2234.18 16443.58 00:15:37.350 17:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:37.350 17:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:37.609 17:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:37.869 17:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:37.869 17:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:38.129 17:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:38.388 17:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:41.678 17:00:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:41.678 17:00:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75553 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75553 ']' 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75553 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75553 00:15:41.678 killing process with pid 75553 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75553' 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75553 00:15:41.678 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75553 00:15:41.937 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:41.937 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.196 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.196 rmmod nvme_tcp 00:15:42.196 rmmod nvme_fabrics 00:15:42.456 rmmod nvme_keyring 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75306 ']' 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75306 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75306 ']' 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75306 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75306 00:15:42.456 killing process with pid 75306 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75306' 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75306 00:15:42.456 17:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75306 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:42.716 00:15:42.716 real 0m32.296s 00:15:42.716 user 2m4.868s 00:15:42.716 sys 0m5.738s 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.716 17:01:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:42.716 ************************************ 00:15:42.716 END TEST nvmf_failover 00:15:42.716 ************************************ 00:15:42.975 17:01:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:42.975 17:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.975 17:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.975 17:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.975 ************************************ 00:15:42.975 START TEST nvmf_host_discovery 00:15:42.975 ************************************ 00:15:42.975 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:42.976 * Looking for test storage... 
00:15:42.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.976 --rc genhtml_branch_coverage=1 00:15:42.976 --rc genhtml_function_coverage=1 00:15:42.976 --rc genhtml_legend=1 00:15:42.976 --rc geninfo_all_blocks=1 00:15:42.976 --rc geninfo_unexecuted_blocks=1 00:15:42.976 00:15:42.976 ' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.976 --rc genhtml_branch_coverage=1 00:15:42.976 --rc genhtml_function_coverage=1 00:15:42.976 --rc genhtml_legend=1 00:15:42.976 --rc geninfo_all_blocks=1 00:15:42.976 --rc geninfo_unexecuted_blocks=1 00:15:42.976 00:15:42.976 ' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.976 --rc genhtml_branch_coverage=1 00:15:42.976 --rc genhtml_function_coverage=1 00:15:42.976 --rc genhtml_legend=1 00:15:42.976 --rc geninfo_all_blocks=1 00:15:42.976 --rc geninfo_unexecuted_blocks=1 00:15:42.976 00:15:42.976 ' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.976 --rc genhtml_branch_coverage=1 00:15:42.976 --rc genhtml_function_coverage=1 00:15:42.976 --rc genhtml_legend=1 00:15:42.976 --rc geninfo_all_blocks=1 00:15:42.976 --rc geninfo_unexecuted_blocks=1 00:15:42.976 00:15:42.976 ' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.976 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
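For orientation, the nvmf_veth_init step that follows wires up a private test network entirely from veth pairs: the initiator interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined by the nvmf_br bridge. A condensed sketch of the same sequence, using the interface names and 10.0.0.0/24 addresses from this run (only the first initiator/target pair is spelled out; the *_if2/*_br2 pair is created the same way, and the iptables comments added by the ipts wrapper are omitted):

  ip netns add nvmf_tgt_ns_spdk
  # each interface is one end of a veth pair; the *_br peer stays in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the peer ends so initiator and target can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open TCP/4420 on the initiator interface and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # same sanity check the log performs below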
00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.977 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:43.235 Cannot find device "nvmf_init_br" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:43.235 Cannot find device "nvmf_init_br2" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:43.235 Cannot find device "nvmf_tgt_br" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.235 Cannot find device "nvmf_tgt_br2" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:43.235 Cannot find device "nvmf_init_br" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:43.235 Cannot find device "nvmf_init_br2" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:43.235 Cannot find device "nvmf_tgt_br" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:43.235 Cannot find device "nvmf_tgt_br2" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:43.235 Cannot find device "nvmf_br" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:43.235 Cannot find device "nvmf_init_if" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:43.235 Cannot find device "nvmf_init_if2" 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.235 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.236 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:43.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:43.495 00:15:43.495 --- 10.0.0.3 ping statistics --- 00:15:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.495 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:43.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:43.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:15:43.495 00:15:43.495 --- 10.0.0.4 ping statistics --- 00:15:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.495 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:43.495 00:15:43.495 --- 10.0.0.1 ping statistics --- 00:15:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.495 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:43.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:43.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:15:43.495 00:15:43.495 --- 10.0.0.2 ping statistics --- 00:15:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.495 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.495 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75951 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75951 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75951 ']' 00:15:43.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.496 17:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.496 [2024-11-20 17:01:01.956218] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:15:43.496 [2024-11-20 17:01:01.956318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.754 [2024-11-20 17:01:02.116097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.754 [2024-11-20 17:01:02.187167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.754 [2024-11-20 17:01:02.187467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.754 [2024-11-20 17:01:02.187503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.754 [2024-11-20 17:01:02.187514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.754 [2024-11-20 17:01:02.187525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.754 [2024-11-20 17:01:02.188076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.754 [2024-11-20 17:01:02.248228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.691 [2024-11-20 17:01:03.078482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.691 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.692 [2024-11-20 17:01:03.090571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.692 17:01:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.692 null0 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.692 null1 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75983 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75983 /tmp/host.sock 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75983 ']' 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:44.692 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.692 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.692 [2024-11-20 17:01:03.181758] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:15:44.692 [2024-11-20 17:01:03.182116] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75983 ] 00:15:44.956 [2024-11-20 17:01:03.330405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.956 [2024-11-20 17:01:03.391232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.956 [2024-11-20 17:01:03.447826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.215 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.476 17:01:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.476 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 [2024-11-20 17:01:03.898889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 17:01:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:45.739 17:01:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:46.307 [2024-11-20 17:01:04.543992] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:46.307 [2024-11-20 17:01:04.544025] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:46.307 [2024-11-20 17:01:04.544048] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:46.307 [2024-11-20 17:01:04.550032] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:46.307 [2024-11-20 17:01:04.604425] 
bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:46.307 [2024-11-20 17:01:04.605411] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x86ee60:1 started. 00:15:46.307 [2024-11-20 17:01:04.607536] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:46.307 [2024-11-20 17:01:04.607707] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:46.307 [2024-11-20 17:01:04.612581] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x86ee60 was disconnected and freed. delete nvme_qpair. 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:46.875 17:01:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:46.875 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.876 [2024-11-20 17:01:05.376308] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8474a0:1 started. 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:46.876 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:46.876 [2024-11-20 17:01:05.382893] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8474a0 was disconnected and freed. delete nvme_qpair. 
00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 [2024-11-20 17:01:05.496497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:47.136 [2024-11-20 17:01:05.497663] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:47.136 [2024-11-20 17:01:05.497699] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:47.136 [2024-11-20 17:01:05.503666] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:47.136 [2024-11-20 17:01:05.567090] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:47.137 [2024-11-20 17:01:05.567142] 
bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:47.137 [2024-11-20 17:01:05.567153] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:47.137 [2024-11-20 17:01:05.567159] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:47.137 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 [2024-11-20 17:01:05.737312] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:47.397 [2024-11-20 17:01:05.737352] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:47.397 [2024-11-20 17:01:05.743328] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:47.397 [2024-11-20 17:01:05.743357] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:47.397 [2024-11-20 17:01:05.743459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.397 [2024-11-20 17:01:05.743511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:47.397 [2024-11-20 17:01:05.743526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.397 [2024-11-20 17:01:05.743535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.397 [2024-11-20 17:01:05.743545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.397 [2024-11-20 17:01:05.743555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.397 [2024-11-20 17:01:05.743565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.397 [2024-11-20 17:01:05.743574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.397 [2024-11-20 17:01:05.743583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84b230 is same with the state(6) to be set 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 17:01:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:47.397 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:47.398 17:01:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.398 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:47.657 17:01:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.657 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.658 17:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.037 [2024-11-20 17:01:07.154896] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:49.037 [2024-11-20 17:01:07.154932] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:49.037 [2024-11-20 17:01:07.154954] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:49.037 [2024-11-20 17:01:07.160949] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:49.037 [2024-11-20 17:01:07.219498] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:49.037 [2024-11-20 17:01:07.220318] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x870370:1 started. 00:15:49.037 [2024-11-20 17:01:07.221977] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:49.037 [2024-11-20 17:01:07.222017] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:49.037 [2024-11-20 17:01:07.223767] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x870370 was disconnected and freed. delete nvme_qpair. 
00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.037 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 request: 00:15:49.038 { 00:15:49.038 "name": "nvme", 00:15:49.038 "trtype": "tcp", 00:15:49.038 "traddr": "10.0.0.3", 00:15:49.038 "adrfam": "ipv4", 00:15:49.038 "trsvcid": "8009", 00:15:49.038 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:49.038 "wait_for_attach": true, 00:15:49.038 "method": "bdev_nvme_start_discovery", 00:15:49.038 "req_id": 1 00:15:49.038 } 00:15:49.038 Got JSON-RPC error response 00:15:49.038 response: 00:15:49.038 { 00:15:49.038 "code": -17, 00:15:49.038 "message": "File exists" 00:15:49.038 } 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.038 17:01:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 request: 00:15:49.038 { 00:15:49.038 "name": "nvme_second", 00:15:49.038 "trtype": "tcp", 00:15:49.038 "traddr": "10.0.0.3", 00:15:49.038 "adrfam": "ipv4", 00:15:49.038 "trsvcid": "8009", 00:15:49.038 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:49.038 "wait_for_attach": true, 00:15:49.038 "method": "bdev_nvme_start_discovery", 00:15:49.038 "req_id": 1 00:15:49.038 } 00:15:49.038 Got JSON-RPC error response 00:15:49.038 response: 00:15:49.038 { 00:15:49.038 "code": -17, 00:15:49.038 "message": "File exists" 00:15:49.038 } 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.038 17:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.976 [2024-11-20 17:01:08.490444] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:49.976 [2024-11-20 17:01:08.490540] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87c030 with addr=10.0.0.3, port=8010 00:15:49.976 [2024-11-20 17:01:08.490568] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:49.976 [2024-11-20 17:01:08.490579] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:49.976 [2024-11-20 17:01:08.490589] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:51.358 [2024-11-20 17:01:09.490424] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:51.358 [2024-11-20 17:01:09.490489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87c030 with addr=10.0.0.3, port=8010 00:15:51.358 [2024-11-20 17:01:09.490516] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:51.358 [2024-11-20 17:01:09.490527] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:51.358 [2024-11-20 17:01:09.490537] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:52.294 [2024-11-20 17:01:10.490260] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:52.294 request: 00:15:52.294 { 00:15:52.294 "name": "nvme_second", 00:15:52.294 "trtype": "tcp", 00:15:52.294 "traddr": "10.0.0.3", 00:15:52.294 "adrfam": "ipv4", 00:15:52.294 "trsvcid": "8010", 00:15:52.294 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:52.294 "wait_for_attach": false, 00:15:52.294 "attach_timeout_ms": 3000, 00:15:52.294 "method": "bdev_nvme_start_discovery", 00:15:52.294 "req_id": 1 00:15:52.294 } 00:15:52.294 Got JSON-RPC error response 00:15:52.294 response: 00:15:52.294 { 00:15:52.294 "code": -110, 00:15:52.295 "message": "Connection timed out" 00:15:52.295 } 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75983 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.295 rmmod nvme_tcp 00:15:52.295 rmmod nvme_fabrics 00:15:52.295 rmmod nvme_keyring 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75951 ']' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75951 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75951 ']' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75951 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75951 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:52.295 killing process with pid 75951 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75951' 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75951 00:15:52.295 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75951 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:52.554 17:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:52.554 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:52.813 00:15:52.813 real 0m9.894s 00:15:52.813 user 0m18.301s 00:15:52.813 sys 0m2.055s 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.813 ************************************ 00:15:52.813 END TEST nvmf_host_discovery 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.813 ************************************ 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.813 ************************************ 00:15:52.813 START TEST nvmf_host_multipath_status 00:15:52.813 ************************************ 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
00:15:52.813 * Looking for test storage... 00:15:52.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.813 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.072 --rc genhtml_branch_coverage=1 00:15:53.072 --rc genhtml_function_coverage=1 00:15:53.072 --rc genhtml_legend=1 00:15:53.072 --rc geninfo_all_blocks=1 00:15:53.072 --rc geninfo_unexecuted_blocks=1 00:15:53.072 00:15:53.072 ' 00:15:53.072 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.073 --rc genhtml_branch_coverage=1 00:15:53.073 --rc genhtml_function_coverage=1 00:15:53.073 --rc genhtml_legend=1 00:15:53.073 --rc geninfo_all_blocks=1 00:15:53.073 --rc geninfo_unexecuted_blocks=1 00:15:53.073 00:15:53.073 ' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.073 --rc genhtml_branch_coverage=1 00:15:53.073 --rc genhtml_function_coverage=1 00:15:53.073 --rc genhtml_legend=1 00:15:53.073 --rc geninfo_all_blocks=1 00:15:53.073 --rc geninfo_unexecuted_blocks=1 00:15:53.073 00:15:53.073 ' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.073 --rc genhtml_branch_coverage=1 00:15:53.073 --rc genhtml_function_coverage=1 00:15:53.073 --rc genhtml_legend=1 00:15:53.073 --rc geninfo_all_blocks=1 00:15:53.073 --rc geninfo_unexecuted_blocks=1 00:15:53.073 00:15:53.073 ' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.073 17:01:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.073 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:53.073 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:53.074 Cannot find device "nvmf_init_br" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:53.074 Cannot find device "nvmf_init_br2" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:53.074 Cannot find device "nvmf_tgt_br" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.074 Cannot find device "nvmf_tgt_br2" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:53.074 Cannot find device "nvmf_init_br" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:53.074 Cannot find device "nvmf_init_br2" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:53.074 Cannot find device "nvmf_tgt_br" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:53.074 Cannot find device "nvmf_tgt_br2" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:53.074 Cannot find device "nvmf_br" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:53.074 Cannot find device "nvmf_init_if" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:53.074 Cannot find device "nvmf_init_if2" 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.074 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:53.332 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:53.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:15:53.333 00:15:53.333 --- 10.0.0.3 ping statistics --- 00:15:53.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.333 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:53.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:53.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:15:53.333 00:15:53.333 --- 10.0.0.4 ping statistics --- 00:15:53.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.333 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:53.333 00:15:53.333 --- 10.0.0.1 ping statistics --- 00:15:53.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.333 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:53.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:53.333 00:15:53.333 --- 10.0.0.2 ping statistics --- 00:15:53.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.333 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:53.333 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76482 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76482 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76482 ']' 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
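For orientation, the virtual test network that the trace above builds (nvmf_veth_init) and the target launch boil down to roughly the commands below. This is a condensed sketch of what nvmf/common.sh runs, not the full helper: interface, namespace and address names are the test defaults shown in the log, the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) and error handling are omitted, and the `&&`/`&` sequencing is added here for brevity.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # host-side initiator port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side port, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
After this the helper brings every link up and pings 10.0.0.1-10.0.0.4, which is the ping output recorded above.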
00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.591 17:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:53.591 [2024-11-20 17:01:11.938168] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:15:53.591 [2024-11-20 17:01:11.938258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.591 [2024-11-20 17:01:12.091799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.851 [2024-11-20 17:01:12.151032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.851 [2024-11-20 17:01:12.151095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.851 [2024-11-20 17:01:12.151109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.851 [2024-11-20 17:01:12.151120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.851 [2024-11-20 17:01:12.151129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.851 [2024-11-20 17:01:12.152345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.851 [2024-11-20 17:01:12.152358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.851 [2024-11-20 17:01:12.210085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76482 00:15:53.851 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.110 [2024-11-20 17:01:12.563824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.110 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:54.677 Malloc0 00:15:54.677 17:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:54.936 17:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.195 17:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:55.455 [2024-11-20 17:01:13.858472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.455 17:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:55.714 [2024-11-20 17:01:14.178690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76537 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76537 /var/tmp/bdevperf.sock 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76537 ']' 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
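Condensed from the RPC calls above, the target side of this test exposes a single malloc namespace behind two TCP listeners (ports 4420 and 4421) so that the bdevperf initiator sees two paths to the same subsystem. The rpc.py invocations and addresses are the ones shown in the trace; only the repeated script paths are shortened here.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # initiator side: bdevperf runs as its own process, driven over a separate RPC socket
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &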
00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.714 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:56.282 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.282 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:56.282 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:56.541 17:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:56.801 Nvme0n1 00:15:56.801 17:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:57.060 Nvme0n1 00:15:57.319 17:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:57.319 17:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:59.240 17:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:59.240 17:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:59.498 17:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:59.757 17:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:00.694 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:00.694 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:00.694 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.694 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.261 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.261 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:01.261 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.261 17:01:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.520 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.520 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.520 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.520 17:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.779 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.779 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.779 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.779 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:02.038 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.038 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:02.038 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:02.038 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.297 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.297 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:02.297 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.297 17:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.556 17:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.556 17:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:02.556 17:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:03.122 17:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
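The per-path checks that follow all repeat one pattern: attach both listeners to a single controller in multipath mode, flip a listener's ANA state on the target, then read the initiator's view of each path from bdevperf's RPC socket and select one field with jq. A condensed sketch, using the socket path, NQN and addresses shown in the trace above:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # target side: make the 4420 listener non-optimized while 4421 stays optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  # initiator side: query whether the path through port 4420 is still current/connected/accessible
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'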
00:16:03.122 17:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.498 17:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.756 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.756 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.756 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.756 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.015 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.015 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:05.015 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.015 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:05.274 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.275 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:05.275 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.275 17:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.533 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.533 17:01:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:05.533 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.533 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:06.099 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.099 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:06.100 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:06.100 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:06.667 17:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:07.638 17:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:07.638 17:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:07.638 17:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.638 17:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.897 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.897 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:07.897 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.897 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:08.156 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.156 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:08.156 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.156 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:08.414 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.414 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.414 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.414 17:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:08.674 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.674 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:08.674 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.674 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.242 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.242 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:09.242 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.242 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:09.501 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.501 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:09.501 17:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:09.759 17:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:10.019 17:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.395 17:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:11.655 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.655 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:11.655 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.655 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.915 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.915 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.915 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.915 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.173 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.173 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:12.173 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.173 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:12.432 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.432 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:12.432 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.432 17:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.691 17:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.691 17:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:12.691 17:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:12.950 17:01:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:13.209 17:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:14.146 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:14.146 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.405 17:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.974 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:15.541 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.541 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:15.541 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:15.541 17:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:15.800 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:16.079 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:16.380 17:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:17.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:17.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:17.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.884 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.884 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:17.884 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.884 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.143 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.143 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:18.143 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:18.143 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.401 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.401 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:18.401 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:18.401 17:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.660 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.660 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:18.660 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.660 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.919 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:18.919 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.919 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.919 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:19.178 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.178 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:19.436 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:19.436 17:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:19.695 17:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:20.262 17:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:21.194 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:21.194 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:21.194 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.194 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:21.453 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.453 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:21.453 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:21.453 17:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.712 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.712 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:21.712 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.712 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:21.971 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.971 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.971 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.971 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:22.229 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.229 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.229 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.229 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:22.488 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.488 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:22.488 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.488 17:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:22.747 17:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.747 17:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:22.747 17:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:23.006 17:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:23.267 17:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:24.203 17:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:24.203 17:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:24.203 17:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.203 17:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:24.771 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.771 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:24.771 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:24.771 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.030 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.030 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:25.030 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.030 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:25.289 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.289 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:25.289 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:25.289 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.548 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
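The check_status calls in this test take six expected values; a sketch consistent with the port_status invocations traced in this log (current, connected and accessible for listeners 4420 and 4421), again a reconstruction rather than the script itself:

  check_status() {
      # $1..$6: expected 4420.current, 4421.current, 4420.connected,
      #         4421.connected, 4420.accessible, 4421.accessible
      port_status 4420 current "$1"
      port_status 4421 current "$2"
      port_status 4420 connected "$3"
      port_status 4421 connected "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }

Note how the expectations change after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active: with both listeners optimized the subsequent check_status true true true true true true passes because both paths are reported as current, whereas under the earlier policy only one path at a time was current.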
00:16:25.548 17:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.548 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.548 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.807 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.807 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:25.807 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.807 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.376 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.376 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:26.376 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:26.637 17:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:26.896 17:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:27.833 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:27.833 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:27.833 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.833 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:28.093 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.093 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:28.093 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:28.093 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.661 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.661 17:01:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:28.661 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.661 17:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:28.921 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.921 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.921 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.921 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.179 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.179 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:29.179 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.179 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:29.438 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.438 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:29.438 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.438 17:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:29.698 17:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.698 17:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:29.698 17:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:29.956 17:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:30.215 17:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:31.153 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:31.153 17:01:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:31.153 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.153 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.411 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.411 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:31.411 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.411 17:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.978 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.978 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.978 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.978 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.236 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.236 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:32.236 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:32.236 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.493 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.493 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:32.493 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.493 17:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.751 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.751 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:32.751 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.751 
17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76537 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76537 ']' 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76537 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76537 00:16:33.010 killing process with pid 76537 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76537' 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76537 00:16:33.010 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76537 00:16:33.010 { 00:16:33.010 "results": [ 00:16:33.010 { 00:16:33.010 "job": "Nvme0n1", 00:16:33.010 "core_mask": "0x4", 00:16:33.010 "workload": "verify", 00:16:33.010 "status": "terminated", 00:16:33.010 "verify_range": { 00:16:33.010 "start": 0, 00:16:33.010 "length": 16384 00:16:33.010 }, 00:16:33.010 "queue_depth": 128, 00:16:33.010 "io_size": 4096, 00:16:33.010 "runtime": 35.753029, 00:16:33.010 "iops": 8466.275682544268, 00:16:33.010 "mibps": 33.07138938493855, 00:16:33.010 "io_failed": 0, 00:16:33.010 "io_timeout": 0, 00:16:33.010 "avg_latency_us": 15091.788316123791, 00:16:33.010 "min_latency_us": 580.8872727272727, 00:16:33.010 "max_latency_us": 4087539.898181818 00:16:33.010 } 00:16:33.010 ], 00:16:33.010 "core_count": 1 00:16:33.010 } 00:16:33.278 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76537 00:16:33.278 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:33.278 [2024-11-20 17:01:14.244382] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
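As a quick consistency check on the bdevperf summary above: with io_size 4096 bytes, 8466.28 IOPS x 4096 B is roughly 34,677,865 B/s, and 34,677,865 / 2^20 is roughly 33.07 MiB/s, matching the reported mibps value of 33.0714.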
00:16:33.278 [2024-11-20 17:01:14.244508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76537 ] 00:16:33.278 [2024-11-20 17:01:14.392934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.278 [2024-11-20 17:01:14.448527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.278 [2024-11-20 17:01:14.504575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.278 Running I/O for 90 seconds... 00:16:33.278 7061.00 IOPS, 27.58 MiB/s [2024-11-20T17:01:51.814Z] 6930.00 IOPS, 27.07 MiB/s [2024-11-20T17:01:51.814Z] 6876.33 IOPS, 26.86 MiB/s [2024-11-20T17:01:51.814Z] 6853.25 IOPS, 26.77 MiB/s [2024-11-20T17:01:51.814Z] 6839.20 IOPS, 26.72 MiB/s [2024-11-20T17:01:51.814Z] 7027.00 IOPS, 27.45 MiB/s [2024-11-20T17:01:51.814Z] 7434.57 IOPS, 29.04 MiB/s [2024-11-20T17:01:51.814Z] 7697.75 IOPS, 30.07 MiB/s [2024-11-20T17:01:51.814Z] 7834.00 IOPS, 30.60 MiB/s [2024-11-20T17:01:51.814Z] 7952.10 IOPS, 31.06 MiB/s [2024-11-20T17:01:51.814Z] 8065.55 IOPS, 31.51 MiB/s [2024-11-20T17:01:51.814Z] 8116.67 IOPS, 31.71 MiB/s [2024-11-20T17:01:51.814Z] 8177.15 IOPS, 31.94 MiB/s [2024-11-20T17:01:51.814Z] 8295.64 IOPS, 32.40 MiB/s [2024-11-20T17:01:51.814Z] 8387.40 IOPS, 32.76 MiB/s [2024-11-20T17:01:51.814Z] [2024-11-20 17:01:31.370834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.278 [2024-11-20 17:01:31.370920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.278 [2024-11-20 17:01:31.370958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.278 [2024-11-20 17:01:31.370976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.370999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 
[2024-11-20 17:01:31.371572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.279 [2024-11-20 17:01:31.371921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.371973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.371994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.279 [2024-11-20 17:01:31.372497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:33.279 [2024-11-20 17:01:31.372519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.372535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.372573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.372610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372840] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.372988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:16:33.280 [2024-11-20 17:01:31.373269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.280 [2024-11-20 17:01:31.373607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.280 [2024-11-20 17:01:31.373906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:33.280 [2024-11-20 17:01:31.373952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.373972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.373995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.281 [2024-11-20 17:01:31.374355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374559] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.374926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:33.281 [2024-11-20 17:01:31.374969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.281 [2024-11-20 17:01:31.375312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.281 [2024-11-20 17:01:31.375334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.375349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8152 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.375602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.375617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.376929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.282 [2024-11-20 17:01:31.376958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.376985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377497] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:16:33.282 [2024-11-20 17:01:31.377915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:33.282 [2024-11-20 17:01:31.377964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.282 [2024-11-20 17:01:31.377982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.378715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.378979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.378993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.283 [2024-11-20 17:01:31.379027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379094] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.283 [2024-11-20 17:01:31.379501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.283 [2024-11-20 17:01:31.379517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.379554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.379591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.379628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.379712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.379773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.379811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.379847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.379882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:33.284 [2024-11-20 17:01:31.379974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.379996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.284 [2024-11-20 17:01:31.380727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.284 [2024-11-20 17:01:31.380764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:33.284 [2024-11-20 17:01:31.380795] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.380811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.380833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.380863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.380887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.380903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.380925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.380940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.380962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.380977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.380999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.381014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.381069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381203] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.381324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.381340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.285 [2024-11-20 17:01:31.395571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:16:33.285 [2024-11-20 17:01:31.395801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.395961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.395982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.285 [2024-11-20 17:01:31.396434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.285 [2024-11-20 17:01:31.396463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.396483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.396532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.396591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.396667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.396717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.396767] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.396817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.396889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.396940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.396971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.396992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.397041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.397092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.286 [2024-11-20 17:01:31.397143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:33.286 [2024-11-20 17:01:31.397886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.397935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.397976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.398006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.398037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.398058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.398088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.398107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.398137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.398167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.398197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.398218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.400451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.400534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.400557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.400588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.286 [2024-11-20 17:01:31.400631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:33.286 [2024-11-20 17:01:31.400670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.400720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.400770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.400820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.400898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.400972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.287 [2024-11-20 17:01:31.401650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.401954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.287 
[2024-11-20 17:01:31.402357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.287 [2024-11-20 17:01:31.402479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:33.287 [2024-11-20 17:01:31.402508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.402529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 
cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.402937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.402969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.288 [2024-11-20 17:01:31.403848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.403950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.403980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.404030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.404080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.404130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.404180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:33.288 [2024-11-20 17:01:31.404254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.288 [2024-11-20 17:01:31.404275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.289 [2024-11-20 17:01:31.404717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.404767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.404817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.404898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.404954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.404981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.405954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.289 [2024-11-20 17:01:31.405974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:33.289 [2024-11-20 17:01:31.406004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.290 
[2024-11-20 17:01:31.406153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.290 [2024-11-20 17:01:31.406515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.406972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.406992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.407465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.407486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:33.290 [2024-11-20 17:01:31.409793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.290 [2024-11-20 17:01:31.409807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.409826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.409841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.409861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.409875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.409909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.409925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.409945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.409959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.409979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.409993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 
[2024-11-20 17:01:31.410134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.291 [2024-11-20 17:01:31.410438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8360 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.410977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.410997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.411011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.411032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.411058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.291 [2024-11-20 17:01:31.411091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:33.291 [2024-11-20 17:01:31.411111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.411802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:16:33.292 [2024-11-20 17:01:31.411823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.411982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.411997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.412033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.412070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.292 [2024-11-20 17:01:31.412114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:33.292 [2024-11-20 17:01:31.412370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.292 [2024-11-20 17:01:31.412385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.293 [2024-11-20 17:01:31.412772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.412983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.412997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.413018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.413033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.413054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.423874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.423925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.423944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.423965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.423979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.423998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:33.293 [2024-11-20 17:01:31.424252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.293 [2024-11-20 17:01:31.424454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.293 [2024-11-20 17:01:31.424469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.424505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.294 [2024-11-20 17:01:31.424582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.294 [2024-11-20 17:01:31.424895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.424945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.424972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.424989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 17:01:31.425364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.294 [2024-11-20 
17:01:31.425398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:33.294 [2024-11-20 17:01:31.425419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:31.425433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:31.425453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:31.425467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:31.425487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:31.425500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:31.425528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:31.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:31.426009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:31.426037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.295 8253.69 IOPS, 32.24 MiB/s [2024-11-20T17:01:51.831Z] 7768.18 IOPS, 30.34 MiB/s [2024-11-20T17:01:51.831Z] 7336.61 IOPS, 28.66 MiB/s [2024-11-20T17:01:51.831Z] 6950.47 IOPS, 27.15 MiB/s [2024-11-20T17:01:51.831Z] 6739.70 IOPS, 26.33 MiB/s [2024-11-20T17:01:51.831Z] 6846.95 IOPS, 26.75 MiB/s [2024-11-20T17:01:51.831Z] 6943.36 IOPS, 27.12 MiB/s [2024-11-20T17:01:51.831Z] 7113.78 IOPS, 27.79 MiB/s [2024-11-20T17:01:51.831Z] 7345.71 IOPS, 28.69 MiB/s [2024-11-20T17:01:51.831Z] 7574.24 IOPS, 29.59 MiB/s [2024-11-20T17:01:51.831Z] 7720.38 IOPS, 30.16 MiB/s [2024-11-20T17:01:51.831Z] 7756.81 IOPS, 30.30 MiB/s [2024-11-20T17:01:51.831Z] 7790.07 IOPS, 30.43 MiB/s [2024-11-20T17:01:51.831Z] 7820.48 IOPS, 30.55 MiB/s [2024-11-20T17:01:51.831Z] 7895.93 IOPS, 30.84 MiB/s [2024-11-20T17:01:51.831Z] 8072.77 IOPS, 31.53 MiB/s [2024-11-20T17:01:51.831Z] 8236.06 IOPS, 32.17 MiB/s [2024-11-20T17:01:51.831Z] [2024-11-20 17:01:48.627482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.627548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.629645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 
cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.629687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.629949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.629971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.295 [2024-11-20 17:01:48.630511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:33.295 [2024-11-20 17:01:48.630563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.295 [2024-11-20 17:01:48.630576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.630696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.630728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.630759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.630935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.630970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.630990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.631104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 
17:01:48.631123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.631136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.296 [2024-11-20 17:01:48.631169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:33.296 [2024-11-20 17:01:48.631331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:33.296 [2024-11-20 17:01:48.631345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:33.296 8384.45 IOPS, 32.75 MiB/s [2024-11-20T17:01:51.832Z] 8419.62 IOPS, 32.89 MiB/s [2024-11-20T17:01:51.832Z] 8450.49 IOPS, 33.01 MiB/s [2024-11-20T17:01:51.832Z] Received shutdown signal, test time was about 35.753902 seconds 00:16:33.296 00:16:33.296 Latency(us) 00:16:33.296 [2024-11-20T17:01:51.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.296 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:33.296 Verification LBA range: start 0x0 length 0x4000 00:16:33.296 Nvme0n1 : 35.75 8466.28 33.07 0.00 0.00 15091.79 580.89 4087539.90 00:16:33.296 [2024-11-20T17:01:51.832Z] =================================================================================================================== 00:16:33.296 [2024-11-20T17:01:51.832Z] Total : 8466.28 33.07 0.00 0.00 15091.79 580.89 4087539.90 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:33.556 17:01:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:33.556 17:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:33.556 rmmod nvme_tcp 00:16:33.556 rmmod nvme_fabrics 00:16:33.556 rmmod nvme_keyring 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76482 ']' 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76482 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76482 ']' 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76482 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76482 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76482' 00:16:33.556 killing process with pid 76482 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76482 00:16:33.556 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76482 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:33.815 17:01:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:33.815 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:34.075 00:16:34.075 real 0m41.312s 00:16:34.075 user 2m14.368s 00:16:34.075 sys 0m12.252s 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:34.075 ************************************ 00:16:34.075 END TEST nvmf_host_multipath_status 00:16:34.075 ************************************ 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.075 17:01:52 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.075 ************************************ 00:16:34.075 START TEST nvmf_discovery_remove_ifc 00:16:34.075 ************************************ 00:16:34.075 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:34.335 * Looking for test storage... 00:16:34.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:34.335 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.336 --rc genhtml_branch_coverage=1 00:16:34.336 --rc genhtml_function_coverage=1 00:16:34.336 --rc genhtml_legend=1 00:16:34.336 --rc geninfo_all_blocks=1 00:16:34.336 --rc geninfo_unexecuted_blocks=1 00:16:34.336 00:16:34.336 ' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.336 --rc genhtml_branch_coverage=1 00:16:34.336 --rc genhtml_function_coverage=1 00:16:34.336 --rc genhtml_legend=1 00:16:34.336 --rc geninfo_all_blocks=1 00:16:34.336 --rc geninfo_unexecuted_blocks=1 00:16:34.336 00:16:34.336 ' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.336 --rc genhtml_branch_coverage=1 00:16:34.336 --rc genhtml_function_coverage=1 00:16:34.336 --rc genhtml_legend=1 00:16:34.336 --rc geninfo_all_blocks=1 00:16:34.336 --rc geninfo_unexecuted_blocks=1 00:16:34.336 00:16:34.336 ' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.336 --rc genhtml_branch_coverage=1 00:16:34.336 --rc genhtml_function_coverage=1 00:16:34.336 --rc genhtml_legend=1 00:16:34.336 --rc geninfo_all_blocks=1 00:16:34.336 --rc geninfo_unexecuted_blocks=1 00:16:34.336 00:16:34.336 ' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.336 17:01:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:34.336 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.337 17:01:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:34.337 Cannot find device "nvmf_init_br" 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:34.337 Cannot find device "nvmf_init_br2" 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:34.337 Cannot find device "nvmf_tgt_br" 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.337 Cannot find device "nvmf_tgt_br2" 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:34.337 Cannot find device "nvmf_init_br" 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:34.337 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:34.597 Cannot find device "nvmf_init_br2" 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:34.597 Cannot find device "nvmf_tgt_br" 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:34.597 Cannot find device "nvmf_tgt_br2" 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:34.597 Cannot find device "nvmf_br" 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:34.597 Cannot find device "nvmf_init_if" 00:16:34.597 17:01:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:34.597 Cannot find device "nvmf_init_if2" 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.597 17:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.598 17:01:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.598 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:34.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:34.858 00:16:34.858 --- 10.0.0.3 ping statistics --- 00:16:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.858 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:34.858 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:34.858 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:16:34.858 00:16:34.858 --- 10.0.0.4 ping statistics --- 00:16:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.858 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:34.858 00:16:34.858 --- 10.0.0.1 ping statistics --- 00:16:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.858 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:34.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:34.858 00:16:34.858 --- 10.0.0.2 ping statistics --- 00:16:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.858 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77393 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77393 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77393 ']' 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
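The four pings above confirm the veth/bridge topology that the preceding nvmf_veth_init commands build before the target is launched inside the nvmf_tgt_ns_spdk namespace. As a rough, condensed sketch of that setup, using only the interface names and 10.0.0.0/24 addressing visible in the trace (the real common.sh helper also handles cleanup, retries and error checks, which are omitted here):

  # target-side veth ends go into their own namespace; *_br ends stay in the default one
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and bridge the *_br ends together
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done

  # admit NVMe/TCP traffic on port 4420, allow bridge forwarding, then verify connectivity
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping the target in its own network namespace lets the test later delete or down nvmf_tgt_if without touching the host's real interfaces, which is exactly the fault that discovery_remove_ifc.sh injects.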
00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.858 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.858 [2024-11-20 17:01:53.270379] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:16:34.858 [2024-11-20 17:01:53.270493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.118 [2024-11-20 17:01:53.424679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.118 [2024-11-20 17:01:53.485249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.118 [2024-11-20 17:01:53.485321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.118 [2024-11-20 17:01:53.485334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.118 [2024-11-20 17:01:53.485345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.118 [2024-11-20 17:01:53.485354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.118 [2024-11-20 17:01:53.485841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.118 [2024-11-20 17:01:53.544013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.118 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.118 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:35.118 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.118 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.118 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.377 [2024-11-20 17:01:53.671484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.377 [2024-11-20 17:01:53.679633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:35.377 null0 00:16:35.377 [2024-11-20 17:01:53.711561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77413 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77413 /tmp/host.sock 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77413 ']' 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.377 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.377 17:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.377 [2024-11-20 17:01:53.795236] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:16:35.377 [2024-11-20 17:01:53.795335] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77413 ] 00:16:35.637 [2024-11-20 17:01:53.941707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.637 [2024-11-20 17:01:53.984243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.637 [2024-11-20 17:01:54.097091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.637 17:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.054 [2024-11-20 17:01:55.154675] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:37.054 [2024-11-20 17:01:55.154710] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:37.054 [2024-11-20 17:01:55.154732] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:37.054 [2024-11-20 17:01:55.160723] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:37.054 [2024-11-20 17:01:55.215154] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:37.054 [2024-11-20 17:01:55.216446] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd81fc0:1 started. 00:16:37.054 [2024-11-20 17:01:55.218925] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:37.054 [2024-11-20 17:01:55.219037] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:37.054 [2024-11-20 17:01:55.219085] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:37.054 [2024-11-20 17:01:55.219111] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:37.054 [2024-11-20 17:01:55.219154] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.054 [2024-11-20 17:01:55.223445] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd81fc0 was disconnected and freed. delete nvme_qpair. 
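With nvme0n1 attached through discovery, the script alternates get_bdev_list and wait_for_bdev while it removes 10.0.0.3 from nvmf_tgt_if and takes the interface down. The helper calls seen in the trace boil down to an RPC query piped through jq; a minimal sketch of that polling pattern, assuming rpc_cmd wraps scripts/rpc.py and that /tmp/host.sock is the host application's RPC socket as shown in the trace (the upstream helpers additionally bound the wait with a timeout):

  # list current bdev names via the host application's RPC socket
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the bdev list matches the expectation
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1   # present right after the discovery attach
  wait_for_bdev ''        # gone once the target-side interface is removed

The repeated bdev_get_bdevs calls that follow, still returning nvme0n1, are this loop waiting for the host to notice the lost connection and delete the bdev.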
00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.054 17:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.989 17:01:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.989 17:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.925 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.184 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:39.184 17:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:40.119 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.119 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:40.120 17:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.055 17:01:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:41.055 17:01:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:42.432 17:02:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.432 [2024-11-20 17:02:00.645771] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:42.432 [2024-11-20 17:02:00.645850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.432 [2024-11-20 17:02:00.645866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.432 [2024-11-20 17:02:00.645878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.432 [2024-11-20 17:02:00.645886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.432 [2024-11-20 17:02:00.645894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.432 [2024-11-20 17:02:00.645902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.432 [2024-11-20 17:02:00.645911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.432 [2024-11-20 17:02:00.645919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.432 [2024-11-20 17:02:00.645928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.432 [2024-11-20 17:02:00.645936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.432 [2024-11-20 17:02:00.645945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e240 is same with the state(6) to be set 00:16:42.432 [2024-11-20 17:02:00.655765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5e240 (9): Bad file descriptor 00:16:42.432 [2024-11-20 17:02:00.665790] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:42.432 [2024-11-20 17:02:00.665813] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:42.432 [2024-11-20 17:02:00.665840] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:42.432 [2024-11-20 17:02:00.665846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:42.432 [2024-11-20 17:02:00.665881] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.368 [2024-11-20 17:02:01.693904] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:43.368 [2024-11-20 17:02:01.694340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5e240 with addr=10.0.0.3, port=4420 00:16:43.368 [2024-11-20 17:02:01.694641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e240 is same with the state(6) to be set 00:16:43.368 [2024-11-20 17:02:01.695013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5e240 (9): Bad file descriptor 00:16:43.368 [2024-11-20 17:02:01.695961] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:43.368 [2024-11-20 17:02:01.696045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:43.368 [2024-11-20 17:02:01.696072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:43.368 [2024-11-20 17:02:01.696097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:43.368 [2024-11-20 17:02:01.696118] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:43.368 [2024-11-20 17:02:01.696132] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:16:43.368 [2024-11-20 17:02:01.696144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:43.368 [2024-11-20 17:02:01.696167] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:43.368 [2024-11-20 17:02:01.696180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:43.368 17:02:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:44.305 [2024-11-20 17:02:02.696299] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:44.305 [2024-11-20 17:02:02.696618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:44.305 [2024-11-20 17:02:02.696780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:44.305 [2024-11-20 17:02:02.696923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:44.305 [2024-11-20 17:02:02.697081] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:44.305 [2024-11-20 17:02:02.697164] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:44.305 [2024-11-20 17:02:02.697335] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:44.305 [2024-11-20 17:02:02.697395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
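For reference, the get_bdev_list/wait_for_bdev helpers that the xtrace repeats once per second reduce to roughly the following, reconstructed from the visible pipeline (the real script may additionally enforce a retry limit):

    get_bdev_list() {
        # List bdev names on the host app, normalized to one sorted line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list matches the expected value ('' means "no bdevs").
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The loop in progress here is wait_for_bdev '', i.e. waiting for nvme0n1 to disappear after the target interface was taken down.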
00:16:44.305 [2024-11-20 17:02:02.697598] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:44.305 [2024-11-20 17:02:02.697827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.305 [2024-11-20 17:02:02.697973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.305 [2024-11-20 17:02:02.698152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.305 [2024-11-20 17:02:02.698223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.305 [2024-11-20 17:02:02.698421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.305 [2024-11-20 17:02:02.698484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.305 [2024-11-20 17:02:02.698664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.305 [2024-11-20 17:02:02.698820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.305 [2024-11-20 17:02:02.698996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.305 [2024-11-20 17:02:02.699009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.305 [2024-11-20 17:02:02.699020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
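The give-up point seen here follows from the reconnect policy requested at attach time. As an illustration only (not SPDK's implementation), the interplay of --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 behaves like a retry loop with a deadline; "$@" stands in for one hypothetical connect attempt:

    reconnect_until_loss_timeout() {
        local delay=1 loss_timeout=2        # --reconnect-delay-sec / --ctrlr-loss-timeout-sec
        local deadline=$((SECONDS + loss_timeout))
        until "$@"; do
            if (( SECONDS >= deadline )); then
                echo "ctrlr-loss-timeout hit: stop retrying and drop the bdev" >&2
                return 1
            fi
            sleep "$delay"
        done
    }
    # example: reconnect_until_loss_timeout nc -z 10.0.0.3 4420

That is why only a couple of failed reconnects (connect() errno 110) are attempted before the discovery entry is removed and nvme0n1 vanishes from the bdev list.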
00:16:44.305 [2024-11-20 17:02:02.699067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9a20 (9): Bad file descriptor 00:16:44.305 [2024-11-20 17:02:02.700070] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:44.305 [2024-11-20 17:02:02.700092] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.305 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.565 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:44.565 17:02:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.515 17:02:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:45.515 17:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:46.472 [2024-11-20 17:02:04.709592] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:46.472 [2024-11-20 17:02:04.709820] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:46.472 [2024-11-20 17:02:04.709867] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:46.472 [2024-11-20 17:02:04.715636] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:46.472 [2024-11-20 17:02:04.770078] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:46.472 [2024-11-20 17:02:04.771063] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd3ca60:1 started. 00:16:46.472 [2024-11-20 17:02:04.772683] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:46.472 [2024-11-20 17:02:04.772899] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:46.472 [2024-11-20 17:02:04.772970] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:46.472 [2024-11-20 17:02:04.773144] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:46.472 [2024-11-20 17:02:04.773280] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:46.472 [2024-11-20 17:02:04.778211] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd3ca60 was disconnected and freed. delete nvme_qpair. 
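Collected in one place, the fault injection and recovery exercised by this test are the four ip commands from the @75/@76 and @82/@83 xtrace lines, run inside the target's network namespace:

    # Take the target's data interface away: pending I/O times out, the host
    # fails its reconnect attempts, and nvme0n1 is removed.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Restore it: the discovery service is reachable again, the subsystem is
    # re-attached, and a fresh bdev (nvme1n1) shows up.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up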
00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77413 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77413 ']' 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77413 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.472 17:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77413 00:16:46.730 killing process with pid 77413 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77413' 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77413 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77413 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:46.730 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:46.989 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:46.989 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:46.989 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.990 rmmod nvme_tcp 00:16:46.990 rmmod nvme_fabrics 00:16:46.990 rmmod nvme_keyring 00:16:46.990 17:02:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77393 ']' 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77393 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77393 ']' 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77393 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77393 00:16:46.990 killing process with pid 77393 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77393' 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77393 00:16:46.990 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77393 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:47.249 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:47.509 00:16:47.509 real 0m13.261s 00:16:47.509 user 0m22.348s 00:16:47.509 sys 0m2.532s 00:16:47.509 ************************************ 00:16:47.509 END TEST nvmf_discovery_remove_ifc 00:16:47.509 ************************************ 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.509 ************************************ 00:16:47.509 START TEST nvmf_identify_kernel_target 00:16:47.509 ************************************ 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:47.509 * Looking for test storage... 
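Stepping back to the teardown the log just finished: after the host (pid 77413) and target (pid 77393) apps are killed, nvmftestfini boils down to roughly the following, condensed from the visible xtrace (the final netns delete is an assumption about what the remove_spdk_ns helper does):

    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    # Strip only the iptables rules the test added (they carry a SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach and tear down the veth/bridge topology.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns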
00:16:47.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.509 17:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.769 --rc genhtml_branch_coverage=1 00:16:47.769 --rc genhtml_function_coverage=1 00:16:47.769 --rc genhtml_legend=1 00:16:47.769 --rc geninfo_all_blocks=1 00:16:47.769 --rc geninfo_unexecuted_blocks=1 00:16:47.769 00:16:47.769 ' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.769 --rc genhtml_branch_coverage=1 00:16:47.769 --rc genhtml_function_coverage=1 00:16:47.769 --rc genhtml_legend=1 00:16:47.769 --rc geninfo_all_blocks=1 00:16:47.769 --rc geninfo_unexecuted_blocks=1 00:16:47.769 00:16:47.769 ' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.769 --rc genhtml_branch_coverage=1 00:16:47.769 --rc genhtml_function_coverage=1 00:16:47.769 --rc genhtml_legend=1 00:16:47.769 --rc geninfo_all_blocks=1 00:16:47.769 --rc geninfo_unexecuted_blocks=1 00:16:47.769 00:16:47.769 ' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.769 --rc genhtml_branch_coverage=1 00:16:47.769 --rc genhtml_function_coverage=1 00:16:47.769 --rc genhtml_legend=1 00:16:47.769 --rc geninfo_all_blocks=1 00:16:47.769 --rc geninfo_unexecuted_blocks=1 00:16:47.769 00:16:47.769 ' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
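The lcov version probe traced above (lt 1.15 2 via cmp_versions) is a small dotted-version comparison. A simplified sketch of what those xtrace lines compute (the real scripts/common.sh handles more operators and edge cases):

    decimal() {
        # Echo the component if it is numeric, otherwise treat it as 0.
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    lt() {
        # Return success (0) when version $1 is strictly lower than version $2.
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    # lt 1.15 2 succeeds here (1 < 2 at the first component), so the
    # lcov_branch_coverage/lcov_function_coverage spelling of the options is
    # selected, matching the LCOV_OPTS export in the trace.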
00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.769 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.770 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:47.770 17:02:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.770 17:02:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:47.770 Cannot find device "nvmf_init_br" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:47.770 Cannot find device "nvmf_init_br2" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:47.770 Cannot find device "nvmf_tgt_br" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.770 Cannot find device "nvmf_tgt_br2" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:47.770 Cannot find device "nvmf_init_br" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:47.770 Cannot find device "nvmf_init_br2" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:47.770 Cannot find device "nvmf_tgt_br" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:47.770 Cannot find device "nvmf_tgt_br2" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:47.770 Cannot find device "nvmf_br" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:47.770 Cannot find device "nvmf_init_if" 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:47.770 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:48.029 Cannot find device "nvmf_init_if2" 00:16:48.029 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:48.029 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.029 17:02:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:48.030 17:02:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:48.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:16:48.030 00:16:48.030 --- 10.0.0.3 ping statistics --- 00:16:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.030 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:48.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:48.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:48.030 00:16:48.030 --- 10.0.0.4 ping statistics --- 00:16:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.030 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:48.030 00:16:48.030 --- 10.0.0.1 ping statistics --- 00:16:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.030 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:48.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:48.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:48.030 00:16:48.030 --- 10.0.0.2 ping statistics --- 00:16:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.030 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.030 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:48.289 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:48.290 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:48.290 17:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:48.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.549 Waiting for block devices as requested 00:16:48.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.808 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:48.808 No valid GPT data, bailing 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:48.808 17:02:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:48.808 No valid GPT data, bailing 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:48.808 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:49.067 No valid GPT data, bailing 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:49.067 No valid GPT data, bailing 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -a 10.0.0.1 -t tcp -s 4420 00:16:49.067 00:16:49.067 Discovery Log Number of Records 2, Generation counter 2 00:16:49.067 =====Discovery Log Entry 0====== 00:16:49.067 trtype: tcp 00:16:49.067 adrfam: ipv4 00:16:49.067 subtype: current discovery subsystem 00:16:49.067 treq: not specified, sq flow control disable supported 00:16:49.067 portid: 1 00:16:49.067 trsvcid: 4420 00:16:49.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.067 traddr: 10.0.0.1 00:16:49.067 eflags: none 00:16:49.067 sectype: none 00:16:49.067 =====Discovery Log Entry 1====== 00:16:49.067 trtype: tcp 00:16:49.067 adrfam: ipv4 00:16:49.067 subtype: nvme subsystem 00:16:49.067 treq: not 
specified, sq flow control disable supported 00:16:49.067 portid: 1 00:16:49.067 trsvcid: 4420 00:16:49.067 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:49.067 traddr: 10.0.0.1 00:16:49.067 eflags: none 00:16:49.067 sectype: none 00:16:49.067 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:49.067 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:49.326 ===================================================== 00:16:49.326 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:49.326 ===================================================== 00:16:49.326 Controller Capabilities/Features 00:16:49.326 ================================ 00:16:49.326 Vendor ID: 0000 00:16:49.326 Subsystem Vendor ID: 0000 00:16:49.326 Serial Number: 22675e76d04bad7fd161 00:16:49.326 Model Number: Linux 00:16:49.326 Firmware Version: 6.8.9-20 00:16:49.326 Recommended Arb Burst: 0 00:16:49.326 IEEE OUI Identifier: 00 00 00 00:16:49.326 Multi-path I/O 00:16:49.326 May have multiple subsystem ports: No 00:16:49.326 May have multiple controllers: No 00:16:49.326 Associated with SR-IOV VF: No 00:16:49.326 Max Data Transfer Size: Unlimited 00:16:49.326 Max Number of Namespaces: 0 00:16:49.326 Max Number of I/O Queues: 1024 00:16:49.326 NVMe Specification Version (VS): 1.3 00:16:49.326 NVMe Specification Version (Identify): 1.3 00:16:49.326 Maximum Queue Entries: 1024 00:16:49.326 Contiguous Queues Required: No 00:16:49.326 Arbitration Mechanisms Supported 00:16:49.326 Weighted Round Robin: Not Supported 00:16:49.326 Vendor Specific: Not Supported 00:16:49.326 Reset Timeout: 7500 ms 00:16:49.326 Doorbell Stride: 4 bytes 00:16:49.326 NVM Subsystem Reset: Not Supported 00:16:49.326 Command Sets Supported 00:16:49.326 NVM Command Set: Supported 00:16:49.326 Boot Partition: Not Supported 00:16:49.326 Memory Page Size Minimum: 4096 bytes 00:16:49.326 Memory Page Size Maximum: 4096 bytes 00:16:49.326 Persistent Memory Region: Not Supported 00:16:49.326 Optional Asynchronous Events Supported 00:16:49.326 Namespace Attribute Notices: Not Supported 00:16:49.326 Firmware Activation Notices: Not Supported 00:16:49.326 ANA Change Notices: Not Supported 00:16:49.326 PLE Aggregate Log Change Notices: Not Supported 00:16:49.326 LBA Status Info Alert Notices: Not Supported 00:16:49.326 EGE Aggregate Log Change Notices: Not Supported 00:16:49.326 Normal NVM Subsystem Shutdown event: Not Supported 00:16:49.326 Zone Descriptor Change Notices: Not Supported 00:16:49.326 Discovery Log Change Notices: Supported 00:16:49.326 Controller Attributes 00:16:49.326 128-bit Host Identifier: Not Supported 00:16:49.326 Non-Operational Permissive Mode: Not Supported 00:16:49.326 NVM Sets: Not Supported 00:16:49.326 Read Recovery Levels: Not Supported 00:16:49.326 Endurance Groups: Not Supported 00:16:49.326 Predictable Latency Mode: Not Supported 00:16:49.326 Traffic Based Keep ALive: Not Supported 00:16:49.326 Namespace Granularity: Not Supported 00:16:49.326 SQ Associations: Not Supported 00:16:49.326 UUID List: Not Supported 00:16:49.326 Multi-Domain Subsystem: Not Supported 00:16:49.326 Fixed Capacity Management: Not Supported 00:16:49.326 Variable Capacity Management: Not Supported 00:16:49.326 Delete Endurance Group: Not Supported 00:16:49.326 Delete NVM Set: Not Supported 00:16:49.326 Extended LBA Formats Supported: Not Supported 00:16:49.326 Flexible Data 
Placement Supported: Not Supported 00:16:49.326 00:16:49.326 Controller Memory Buffer Support 00:16:49.326 ================================ 00:16:49.326 Supported: No 00:16:49.326 00:16:49.326 Persistent Memory Region Support 00:16:49.326 ================================ 00:16:49.326 Supported: No 00:16:49.326 00:16:49.326 Admin Command Set Attributes 00:16:49.326 ============================ 00:16:49.326 Security Send/Receive: Not Supported 00:16:49.326 Format NVM: Not Supported 00:16:49.326 Firmware Activate/Download: Not Supported 00:16:49.326 Namespace Management: Not Supported 00:16:49.326 Device Self-Test: Not Supported 00:16:49.326 Directives: Not Supported 00:16:49.326 NVMe-MI: Not Supported 00:16:49.326 Virtualization Management: Not Supported 00:16:49.326 Doorbell Buffer Config: Not Supported 00:16:49.326 Get LBA Status Capability: Not Supported 00:16:49.326 Command & Feature Lockdown Capability: Not Supported 00:16:49.326 Abort Command Limit: 1 00:16:49.326 Async Event Request Limit: 1 00:16:49.326 Number of Firmware Slots: N/A 00:16:49.326 Firmware Slot 1 Read-Only: N/A 00:16:49.326 Firmware Activation Without Reset: N/A 00:16:49.326 Multiple Update Detection Support: N/A 00:16:49.326 Firmware Update Granularity: No Information Provided 00:16:49.326 Per-Namespace SMART Log: No 00:16:49.326 Asymmetric Namespace Access Log Page: Not Supported 00:16:49.326 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:49.326 Command Effects Log Page: Not Supported 00:16:49.326 Get Log Page Extended Data: Supported 00:16:49.326 Telemetry Log Pages: Not Supported 00:16:49.326 Persistent Event Log Pages: Not Supported 00:16:49.326 Supported Log Pages Log Page: May Support 00:16:49.326 Commands Supported & Effects Log Page: Not Supported 00:16:49.326 Feature Identifiers & Effects Log Page:May Support 00:16:49.326 NVMe-MI Commands & Effects Log Page: May Support 00:16:49.326 Data Area 4 for Telemetry Log: Not Supported 00:16:49.326 Error Log Page Entries Supported: 1 00:16:49.326 Keep Alive: Not Supported 00:16:49.326 00:16:49.326 NVM Command Set Attributes 00:16:49.326 ========================== 00:16:49.326 Submission Queue Entry Size 00:16:49.326 Max: 1 00:16:49.326 Min: 1 00:16:49.326 Completion Queue Entry Size 00:16:49.326 Max: 1 00:16:49.326 Min: 1 00:16:49.326 Number of Namespaces: 0 00:16:49.326 Compare Command: Not Supported 00:16:49.326 Write Uncorrectable Command: Not Supported 00:16:49.326 Dataset Management Command: Not Supported 00:16:49.326 Write Zeroes Command: Not Supported 00:16:49.326 Set Features Save Field: Not Supported 00:16:49.326 Reservations: Not Supported 00:16:49.326 Timestamp: Not Supported 00:16:49.327 Copy: Not Supported 00:16:49.327 Volatile Write Cache: Not Present 00:16:49.327 Atomic Write Unit (Normal): 1 00:16:49.327 Atomic Write Unit (PFail): 1 00:16:49.327 Atomic Compare & Write Unit: 1 00:16:49.327 Fused Compare & Write: Not Supported 00:16:49.327 Scatter-Gather List 00:16:49.327 SGL Command Set: Supported 00:16:49.327 SGL Keyed: Not Supported 00:16:49.327 SGL Bit Bucket Descriptor: Not Supported 00:16:49.327 SGL Metadata Pointer: Not Supported 00:16:49.327 Oversized SGL: Not Supported 00:16:49.327 SGL Metadata Address: Not Supported 00:16:49.327 SGL Offset: Supported 00:16:49.327 Transport SGL Data Block: Not Supported 00:16:49.327 Replay Protected Memory Block: Not Supported 00:16:49.327 00:16:49.327 Firmware Slot Information 00:16:49.327 ========================= 00:16:49.327 Active slot: 0 00:16:49.327 00:16:49.327 00:16:49.327 Error Log 
00:16:49.327 ========= 00:16:49.327 00:16:49.327 Active Namespaces 00:16:49.327 ================= 00:16:49.327 Discovery Log Page 00:16:49.327 ================== 00:16:49.327 Generation Counter: 2 00:16:49.327 Number of Records: 2 00:16:49.327 Record Format: 0 00:16:49.327 00:16:49.327 Discovery Log Entry 0 00:16:49.327 ---------------------- 00:16:49.327 Transport Type: 3 (TCP) 00:16:49.327 Address Family: 1 (IPv4) 00:16:49.327 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:49.327 Entry Flags: 00:16:49.327 Duplicate Returned Information: 0 00:16:49.327 Explicit Persistent Connection Support for Discovery: 0 00:16:49.327 Transport Requirements: 00:16:49.327 Secure Channel: Not Specified 00:16:49.327 Port ID: 1 (0x0001) 00:16:49.327 Controller ID: 65535 (0xffff) 00:16:49.327 Admin Max SQ Size: 32 00:16:49.327 Transport Service Identifier: 4420 00:16:49.327 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:49.327 Transport Address: 10.0.0.1 00:16:49.327 Discovery Log Entry 1 00:16:49.327 ---------------------- 00:16:49.327 Transport Type: 3 (TCP) 00:16:49.327 Address Family: 1 (IPv4) 00:16:49.327 Subsystem Type: 2 (NVM Subsystem) 00:16:49.327 Entry Flags: 00:16:49.327 Duplicate Returned Information: 0 00:16:49.327 Explicit Persistent Connection Support for Discovery: 0 00:16:49.327 Transport Requirements: 00:16:49.327 Secure Channel: Not Specified 00:16:49.327 Port ID: 1 (0x0001) 00:16:49.327 Controller ID: 65535 (0xffff) 00:16:49.327 Admin Max SQ Size: 32 00:16:49.327 Transport Service Identifier: 4420 00:16:49.327 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:49.327 Transport Address: 10.0.0.1 00:16:49.327 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:49.587 get_feature(0x01) failed 00:16:49.587 get_feature(0x02) failed 00:16:49.587 get_feature(0x04) failed 00:16:49.587 ===================================================== 00:16:49.587 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:49.587 ===================================================== 00:16:49.587 Controller Capabilities/Features 00:16:49.587 ================================ 00:16:49.587 Vendor ID: 0000 00:16:49.587 Subsystem Vendor ID: 0000 00:16:49.587 Serial Number: 4ab2b8ded667a03f2bd6 00:16:49.587 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:49.587 Firmware Version: 6.8.9-20 00:16:49.587 Recommended Arb Burst: 6 00:16:49.587 IEEE OUI Identifier: 00 00 00 00:16:49.587 Multi-path I/O 00:16:49.587 May have multiple subsystem ports: Yes 00:16:49.587 May have multiple controllers: Yes 00:16:49.587 Associated with SR-IOV VF: No 00:16:49.587 Max Data Transfer Size: Unlimited 00:16:49.587 Max Number of Namespaces: 1024 00:16:49.587 Max Number of I/O Queues: 128 00:16:49.587 NVMe Specification Version (VS): 1.3 00:16:49.587 NVMe Specification Version (Identify): 1.3 00:16:49.587 Maximum Queue Entries: 1024 00:16:49.587 Contiguous Queues Required: No 00:16:49.587 Arbitration Mechanisms Supported 00:16:49.587 Weighted Round Robin: Not Supported 00:16:49.587 Vendor Specific: Not Supported 00:16:49.587 Reset Timeout: 7500 ms 00:16:49.587 Doorbell Stride: 4 bytes 00:16:49.587 NVM Subsystem Reset: Not Supported 00:16:49.587 Command Sets Supported 00:16:49.587 NVM Command Set: Supported 00:16:49.587 Boot Partition: Not Supported 00:16:49.587 Memory 
Page Size Minimum: 4096 bytes 00:16:49.587 Memory Page Size Maximum: 4096 bytes 00:16:49.587 Persistent Memory Region: Not Supported 00:16:49.587 Optional Asynchronous Events Supported 00:16:49.587 Namespace Attribute Notices: Supported 00:16:49.587 Firmware Activation Notices: Not Supported 00:16:49.587 ANA Change Notices: Supported 00:16:49.587 PLE Aggregate Log Change Notices: Not Supported 00:16:49.587 LBA Status Info Alert Notices: Not Supported 00:16:49.587 EGE Aggregate Log Change Notices: Not Supported 00:16:49.587 Normal NVM Subsystem Shutdown event: Not Supported 00:16:49.587 Zone Descriptor Change Notices: Not Supported 00:16:49.587 Discovery Log Change Notices: Not Supported 00:16:49.587 Controller Attributes 00:16:49.587 128-bit Host Identifier: Supported 00:16:49.587 Non-Operational Permissive Mode: Not Supported 00:16:49.587 NVM Sets: Not Supported 00:16:49.587 Read Recovery Levels: Not Supported 00:16:49.587 Endurance Groups: Not Supported 00:16:49.587 Predictable Latency Mode: Not Supported 00:16:49.587 Traffic Based Keep ALive: Supported 00:16:49.587 Namespace Granularity: Not Supported 00:16:49.587 SQ Associations: Not Supported 00:16:49.587 UUID List: Not Supported 00:16:49.587 Multi-Domain Subsystem: Not Supported 00:16:49.587 Fixed Capacity Management: Not Supported 00:16:49.587 Variable Capacity Management: Not Supported 00:16:49.587 Delete Endurance Group: Not Supported 00:16:49.587 Delete NVM Set: Not Supported 00:16:49.587 Extended LBA Formats Supported: Not Supported 00:16:49.587 Flexible Data Placement Supported: Not Supported 00:16:49.587 00:16:49.587 Controller Memory Buffer Support 00:16:49.587 ================================ 00:16:49.587 Supported: No 00:16:49.587 00:16:49.587 Persistent Memory Region Support 00:16:49.587 ================================ 00:16:49.587 Supported: No 00:16:49.587 00:16:49.587 Admin Command Set Attributes 00:16:49.587 ============================ 00:16:49.587 Security Send/Receive: Not Supported 00:16:49.587 Format NVM: Not Supported 00:16:49.587 Firmware Activate/Download: Not Supported 00:16:49.587 Namespace Management: Not Supported 00:16:49.587 Device Self-Test: Not Supported 00:16:49.587 Directives: Not Supported 00:16:49.587 NVMe-MI: Not Supported 00:16:49.587 Virtualization Management: Not Supported 00:16:49.587 Doorbell Buffer Config: Not Supported 00:16:49.587 Get LBA Status Capability: Not Supported 00:16:49.587 Command & Feature Lockdown Capability: Not Supported 00:16:49.587 Abort Command Limit: 4 00:16:49.587 Async Event Request Limit: 4 00:16:49.587 Number of Firmware Slots: N/A 00:16:49.587 Firmware Slot 1 Read-Only: N/A 00:16:49.587 Firmware Activation Without Reset: N/A 00:16:49.587 Multiple Update Detection Support: N/A 00:16:49.588 Firmware Update Granularity: No Information Provided 00:16:49.588 Per-Namespace SMART Log: Yes 00:16:49.588 Asymmetric Namespace Access Log Page: Supported 00:16:49.588 ANA Transition Time : 10 sec 00:16:49.588 00:16:49.588 Asymmetric Namespace Access Capabilities 00:16:49.588 ANA Optimized State : Supported 00:16:49.588 ANA Non-Optimized State : Supported 00:16:49.588 ANA Inaccessible State : Supported 00:16:49.588 ANA Persistent Loss State : Supported 00:16:49.588 ANA Change State : Supported 00:16:49.588 ANAGRPID is not changed : No 00:16:49.588 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:49.588 00:16:49.588 ANA Group Identifier Maximum : 128 00:16:49.588 Number of ANA Group Identifiers : 128 00:16:49.588 Max Number of Allowed Namespaces : 1024 00:16:49.588 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:49.588 Command Effects Log Page: Supported 00:16:49.588 Get Log Page Extended Data: Supported 00:16:49.588 Telemetry Log Pages: Not Supported 00:16:49.588 Persistent Event Log Pages: Not Supported 00:16:49.588 Supported Log Pages Log Page: May Support 00:16:49.588 Commands Supported & Effects Log Page: Not Supported 00:16:49.588 Feature Identifiers & Effects Log Page:May Support 00:16:49.588 NVMe-MI Commands & Effects Log Page: May Support 00:16:49.588 Data Area 4 for Telemetry Log: Not Supported 00:16:49.588 Error Log Page Entries Supported: 128 00:16:49.588 Keep Alive: Supported 00:16:49.588 Keep Alive Granularity: 1000 ms 00:16:49.588 00:16:49.588 NVM Command Set Attributes 00:16:49.588 ========================== 00:16:49.588 Submission Queue Entry Size 00:16:49.588 Max: 64 00:16:49.588 Min: 64 00:16:49.588 Completion Queue Entry Size 00:16:49.588 Max: 16 00:16:49.588 Min: 16 00:16:49.588 Number of Namespaces: 1024 00:16:49.588 Compare Command: Not Supported 00:16:49.588 Write Uncorrectable Command: Not Supported 00:16:49.588 Dataset Management Command: Supported 00:16:49.588 Write Zeroes Command: Supported 00:16:49.588 Set Features Save Field: Not Supported 00:16:49.588 Reservations: Not Supported 00:16:49.588 Timestamp: Not Supported 00:16:49.588 Copy: Not Supported 00:16:49.588 Volatile Write Cache: Present 00:16:49.588 Atomic Write Unit (Normal): 1 00:16:49.588 Atomic Write Unit (PFail): 1 00:16:49.588 Atomic Compare & Write Unit: 1 00:16:49.588 Fused Compare & Write: Not Supported 00:16:49.588 Scatter-Gather List 00:16:49.588 SGL Command Set: Supported 00:16:49.588 SGL Keyed: Not Supported 00:16:49.588 SGL Bit Bucket Descriptor: Not Supported 00:16:49.588 SGL Metadata Pointer: Not Supported 00:16:49.588 Oversized SGL: Not Supported 00:16:49.588 SGL Metadata Address: Not Supported 00:16:49.588 SGL Offset: Supported 00:16:49.588 Transport SGL Data Block: Not Supported 00:16:49.588 Replay Protected Memory Block: Not Supported 00:16:49.588 00:16:49.588 Firmware Slot Information 00:16:49.588 ========================= 00:16:49.588 Active slot: 0 00:16:49.588 00:16:49.588 Asymmetric Namespace Access 00:16:49.588 =========================== 00:16:49.588 Change Count : 0 00:16:49.588 Number of ANA Group Descriptors : 1 00:16:49.588 ANA Group Descriptor : 0 00:16:49.588 ANA Group ID : 1 00:16:49.588 Number of NSID Values : 1 00:16:49.588 Change Count : 0 00:16:49.588 ANA State : 1 00:16:49.588 Namespace Identifier : 1 00:16:49.588 00:16:49.588 Commands Supported and Effects 00:16:49.588 ============================== 00:16:49.588 Admin Commands 00:16:49.588 -------------- 00:16:49.588 Get Log Page (02h): Supported 00:16:49.588 Identify (06h): Supported 00:16:49.588 Abort (08h): Supported 00:16:49.588 Set Features (09h): Supported 00:16:49.588 Get Features (0Ah): Supported 00:16:49.588 Asynchronous Event Request (0Ch): Supported 00:16:49.588 Keep Alive (18h): Supported 00:16:49.588 I/O Commands 00:16:49.588 ------------ 00:16:49.588 Flush (00h): Supported 00:16:49.588 Write (01h): Supported LBA-Change 00:16:49.588 Read (02h): Supported 00:16:49.588 Write Zeroes (08h): Supported LBA-Change 00:16:49.588 Dataset Management (09h): Supported 00:16:49.588 00:16:49.588 Error Log 00:16:49.588 ========= 00:16:49.588 Entry: 0 00:16:49.588 Error Count: 0x3 00:16:49.588 Submission Queue Id: 0x0 00:16:49.588 Command Id: 0x5 00:16:49.588 Phase Bit: 0 00:16:49.588 Status Code: 0x2 00:16:49.588 Status Code Type: 0x0 00:16:49.588 Do Not Retry: 1 00:16:49.588 Error 
Location: 0x28 00:16:49.588 LBA: 0x0 00:16:49.588 Namespace: 0x0 00:16:49.588 Vendor Log Page: 0x0 00:16:49.588 ----------- 00:16:49.588 Entry: 1 00:16:49.588 Error Count: 0x2 00:16:49.588 Submission Queue Id: 0x0 00:16:49.588 Command Id: 0x5 00:16:49.588 Phase Bit: 0 00:16:49.588 Status Code: 0x2 00:16:49.588 Status Code Type: 0x0 00:16:49.588 Do Not Retry: 1 00:16:49.588 Error Location: 0x28 00:16:49.588 LBA: 0x0 00:16:49.588 Namespace: 0x0 00:16:49.588 Vendor Log Page: 0x0 00:16:49.588 ----------- 00:16:49.588 Entry: 2 00:16:49.588 Error Count: 0x1 00:16:49.588 Submission Queue Id: 0x0 00:16:49.588 Command Id: 0x4 00:16:49.588 Phase Bit: 0 00:16:49.588 Status Code: 0x2 00:16:49.588 Status Code Type: 0x0 00:16:49.588 Do Not Retry: 1 00:16:49.588 Error Location: 0x28 00:16:49.588 LBA: 0x0 00:16:49.588 Namespace: 0x0 00:16:49.588 Vendor Log Page: 0x0 00:16:49.588 00:16:49.588 Number of Queues 00:16:49.588 ================ 00:16:49.588 Number of I/O Submission Queues: 128 00:16:49.588 Number of I/O Completion Queues: 128 00:16:49.588 00:16:49.588 ZNS Specific Controller Data 00:16:49.588 ============================ 00:16:49.588 Zone Append Size Limit: 0 00:16:49.588 00:16:49.588 00:16:49.588 Active Namespaces 00:16:49.588 ================= 00:16:49.588 get_feature(0x05) failed 00:16:49.588 Namespace ID:1 00:16:49.588 Command Set Identifier: NVM (00h) 00:16:49.588 Deallocate: Supported 00:16:49.588 Deallocated/Unwritten Error: Not Supported 00:16:49.588 Deallocated Read Value: Unknown 00:16:49.588 Deallocate in Write Zeroes: Not Supported 00:16:49.588 Deallocated Guard Field: 0xFFFF 00:16:49.588 Flush: Supported 00:16:49.588 Reservation: Not Supported 00:16:49.588 Namespace Sharing Capabilities: Multiple Controllers 00:16:49.588 Size (in LBAs): 1310720 (5GiB) 00:16:49.588 Capacity (in LBAs): 1310720 (5GiB) 00:16:49.588 Utilization (in LBAs): 1310720 (5GiB) 00:16:49.588 UUID: 3f1c2750-e82d-4f21-9a76-37865e6c7b43 00:16:49.588 Thin Provisioning: Not Supported 00:16:49.588 Per-NS Atomic Units: Yes 00:16:49.589 Atomic Boundary Size (Normal): 0 00:16:49.589 Atomic Boundary Size (PFail): 0 00:16:49.589 Atomic Boundary Offset: 0 00:16:49.589 NGUID/EUI64 Never Reused: No 00:16:49.589 ANA group ID: 1 00:16:49.589 Namespace Write Protected: No 00:16:49.589 Number of LBA Formats: 1 00:16:49.589 Current LBA Format: LBA Format #00 00:16:49.589 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:49.589 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.589 rmmod nvme_tcp 00:16:49.589 rmmod nvme_fabrics 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:49.589 17:02:07 
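The module unload just traced is deliberately tolerant: error checking is switched off, the host-side kernel modules are removed inside a bounded retry loop (the rmmod lines are modprobe's verbose output), and only then is -e restored. A reduced sketch of that pattern; the back-off sleep is an assumption, since the trace only shows the first, successful pass:

sync
set +e                                   # removal may fail while devices are still settling
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                              # assumed back-off between attempts
done
set -e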
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:49.589 17:02:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:49.589 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:49.848 17:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:50.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:50.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.784 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.784 ************************************ 00:16:50.784 END TEST nvmf_identify_kernel_target 00:16:50.784 ************************************ 00:16:50.784 00:16:50.784 real 0m3.327s 00:16:50.784 user 0m1.203s 00:16:50.784 sys 0m1.513s 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.784 ************************************ 00:16:50.784 START TEST nvmf_auth_host 00:16:50.784 ************************************ 00:16:50.784 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:51.044 * Looking for test storage... 
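configure_kernel_target and clean_kernel_target, both traced above, drive the kernel nvmet target entirely through configfs: a subsystem directory, a namespace backed by the free local NVMe block device found by the GPT scan, and a port the subsystem is linked into; teardown reverses the same steps and unloads the modules. A condensed sketch of both halves, using the NQN, address and device from the trace. The xtrace output does not show the redirection targets, so the attribute file names below follow the standard kernel nvmet configfs layout rather than the trace itself:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$ns/device_path"     # the unused block device picked above
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"       # exposes the subsystem on the port

# teardown, as in clean_kernel_target: disable, unlink, remove in reverse order, unload
echo 0 > "$ns/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet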
00:16:51.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.044 --rc genhtml_branch_coverage=1 00:16:51.044 --rc genhtml_function_coverage=1 00:16:51.044 --rc genhtml_legend=1 00:16:51.044 --rc geninfo_all_blocks=1 00:16:51.044 --rc geninfo_unexecuted_blocks=1 00:16:51.044 00:16:51.044 ' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.044 --rc genhtml_branch_coverage=1 00:16:51.044 --rc genhtml_function_coverage=1 00:16:51.044 --rc genhtml_legend=1 00:16:51.044 --rc geninfo_all_blocks=1 00:16:51.044 --rc geninfo_unexecuted_blocks=1 00:16:51.044 00:16:51.044 ' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.044 --rc genhtml_branch_coverage=1 00:16:51.044 --rc genhtml_function_coverage=1 00:16:51.044 --rc genhtml_legend=1 00:16:51.044 --rc geninfo_all_blocks=1 00:16:51.044 --rc geninfo_unexecuted_blocks=1 00:16:51.044 00:16:51.044 ' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.044 --rc genhtml_branch_coverage=1 00:16:51.044 --rc genhtml_function_coverage=1 00:16:51.044 --rc genhtml_legend=1 00:16:51.044 --rc geninfo_all_blocks=1 00:16:51.044 --rc geninfo_unexecuted_blocks=1 00:16:51.044 00:16:51.044 ' 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.044 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
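The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2 to decide which coverage options to export: both version strings are split on '.', '-' and ':', components are compared left to right as integers, and missing components count as 0. A simplified stand-in for that comparison (not the full cmp_versions helper):

version_lt() {                            # returns 0 if $1 < $2, component by component
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2 && echo "lcov is older than 2"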
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:51.045 Cannot find device "nvmf_init_br" 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:51.045 Cannot find device "nvmf_init_br2" 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:51.045 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:51.304 Cannot find device "nvmf_tgt_br" 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.304 Cannot find device "nvmf_tgt_br2" 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:51.304 Cannot find device "nvmf_init_br" 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:51.304 Cannot find device "nvmf_init_br2" 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:51.304 Cannot find device "nvmf_tgt_br" 00:16:51.304 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:51.305 Cannot find device "nvmf_tgt_br2" 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:51.305 Cannot find device "nvmf_br" 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:51.305 Cannot find device "nvmf_init_if" 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:51.305 Cannot find device "nvmf_init_if2" 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.305 17:02:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:51.305 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
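[editor's note] The trace above (nvmf/common.sh@177-214) builds the veth/bridge test network that the auth test runs over. A minimal standalone sketch of that topology, with the namespace, interface, and address names taken from the trace rather than from the script itself:

    # Two initiator veth pairs stay in the default namespace, two target pairs
    # move into nvmf_tgt_ns_spdk, and all bridge-side ends join nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

With this in place the pings that follow in the trace (10.0.0.3/10.0.0.4 from the default namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) verify that both directions cross the bridge before the target is started.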
00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:51.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:51.564 00:16:51.564 --- 10.0.0.3 ping statistics --- 00:16:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.564 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:51.564 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:51.564 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:51.564 00:16:51.564 --- 10.0.0.4 ping statistics --- 00:16:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.564 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:51.564 00:16:51.564 --- 10.0.0.1 ping statistics --- 00:16:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.564 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:51.564 17:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:51.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:51.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:51.564 00:16:51.564 --- 10.0.0.2 ping statistics --- 00:16:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.564 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78407 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78407 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78407 ']' 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
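[editor's note] At this point nvmfappstart has launched nvmf_tgt inside the target namespace with the nvme_auth debug log flag enabled and is waiting for its RPC socket. A minimal sketch of that step, assuming an SPDK build tree at $SPDK_DIR (the path and flags are reproduced from the trace; the readiness poll below uses the generic rpc_get_methods query rather than the test's own waitforlisten helper):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Run the target inside the test namespace; -L nvme_auth turns on the
    # authentication debug log used throughout this test.
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # Wait until the JSON-RPC socket answers before issuing any rpc.py calls.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done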
00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.564 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=04f6908109ae0cb9bc65dcbc1eac4a38 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L4y 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 04f6908109ae0cb9bc65dcbc1eac4a38 0 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 04f6908109ae0cb9bc65dcbc1eac4a38 0 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=04f6908109ae0cb9bc65dcbc1eac4a38 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L4y 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L4y 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.L4y 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.132 17:02:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:52.132 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a28b872cf8543324fe4fc11d63114ecfeaf411eb1f0f9be8c945261c660de3ac 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.coq 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a28b872cf8543324fe4fc11d63114ecfeaf411eb1f0f9be8c945261c660de3ac 3 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a28b872cf8543324fe4fc11d63114ecfeaf411eb1f0f9be8c945261c660de3ac 3 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a28b872cf8543324fe4fc11d63114ecfeaf411eb1f0f9be8c945261c660de3ac 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.coq 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.coq 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.coq 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=95ccdffe4baceeabf2500603e8dfabb475cb5efe267771d8 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.H2P 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 95ccdffe4baceeabf2500603e8dfabb475cb5efe267771d8 0 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 95ccdffe4baceeabf2500603e8dfabb475cb5efe267771d8 0 
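[editor's note] The gen_dhchap_key calls above draw random hex from /dev/urandom with xxd and wrap it into a DHHC-1 secret via an inline python step. A hedged sketch of that wrapping, matching the keys printed later in the trace; the CRC-32 framing and digest-id mapping (0 = use as-is, 1/2/3 = SHA-256/384/512) are inferred from the DHHC-1 secret format, not copied from the script:

    key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars of key material
    digest=2                                 # e.g. sha384
    python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    # ASCII hex key + little-endian CRC-32 of it, base64-encoded between
    # "DHHC-1:<digest>:" and a trailing ":".
    blob = key + struct.pack("<I", zlib.crc32(key))
    print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
    PY

The resulting strings are what the trace stores in /tmp/spdk.key-* files (chmod 0600) and later registers with keyring_file_add_key and echoes into the kernel nvmet host configuration.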
00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=95ccdffe4baceeabf2500603e8dfabb475cb5efe267771d8 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.H2P 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.H2P 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.H2P 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=779ecee6bee58bd61da6d64c04bffc55600ed84c42b9386e 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0S9 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 779ecee6bee58bd61da6d64c04bffc55600ed84c42b9386e 2 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 779ecee6bee58bd61da6d64c04bffc55600ed84c42b9386e 2 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=779ecee6bee58bd61da6d64c04bffc55600ed84c42b9386e 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:52.392 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0S9 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0S9 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0S9 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.393 17:02:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3da92ddfc018d793a9b35a4a61411682 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KOB 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3da92ddfc018d793a9b35a4a61411682 1 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3da92ddfc018d793a9b35a4a61411682 1 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3da92ddfc018d793a9b35a4a61411682 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:52.393 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KOB 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KOB 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KOB 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d048da740cd134f4053f9d761bb09b77 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ocu 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d048da740cd134f4053f9d761bb09b77 1 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d048da740cd134f4053f9d761bb09b77 1 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d048da740cd134f4053f9d761bb09b77 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:52.652 17:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ocu 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ocu 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Ocu 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d36b45c2bd1d1c5039afb587d8bc366a25499e9cfc149c5c 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DKE 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d36b45c2bd1d1c5039afb587d8bc366a25499e9cfc149c5c 2 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d36b45c2bd1d1c5039afb587d8bc366a25499e9cfc149c5c 2 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d36b45c2bd1d1c5039afb587d8bc366a25499e9cfc149c5c 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DKE 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DKE 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DKE 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:52.652 17:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0223874ad892717383fb132388dbb1a 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.53a 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0223874ad892717383fb132388dbb1a 0 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0223874ad892717383fb132388dbb1a 0 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0223874ad892717383fb132388dbb1a 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:52.652 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.53a 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.53a 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.53a 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:52.653 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b9944f62fe6dfe269a9d29d6b32563a753a025f1dfb4ad8a331272cee32bd26 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sAD 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b9944f62fe6dfe269a9d29d6b32563a753a025f1dfb4ad8a331272cee32bd26 3 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b9944f62fe6dfe269a9d29d6b32563a753a025f1dfb4ad8a331272cee32bd26 3 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b9944f62fe6dfe269a9d29d6b32563a753a025f1dfb4ad8a331272cee32bd26 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sAD 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sAD 00:16:52.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sAD 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78407 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78407 ']' 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.911 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L4y 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.coq ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.coq 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.H2P 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0S9 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.0S9 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KOB 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Ocu ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ocu 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DKE 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.53a ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.53a 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sAD 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.176 17:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.176 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:53.177 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:53.446 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:53.446 17:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:53.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.705 Waiting for block devices as requested 00:16:53.705 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:54.531 No valid GPT data, bailing 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:54.531 No valid GPT data, bailing 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:54.531 17:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:54.531 No valid GPT data, bailing 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:54.531 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:54.791 No valid GPT data, bailing 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -a 10.0.0.1 -t tcp -s 4420 00:16:54.791 00:16:54.791 Discovery Log Number of Records 2, Generation counter 2 00:16:54.791 =====Discovery Log Entry 0====== 00:16:54.791 trtype: tcp 00:16:54.791 adrfam: ipv4 00:16:54.791 subtype: current discovery subsystem 00:16:54.791 treq: not specified, sq flow control disable supported 00:16:54.791 portid: 1 00:16:54.791 trsvcid: 4420 00:16:54.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:54.791 traddr: 10.0.0.1 00:16:54.791 eflags: none 00:16:54.791 sectype: none 00:16:54.791 =====Discovery Log Entry 1====== 00:16:54.791 trtype: tcp 00:16:54.791 adrfam: ipv4 00:16:54.791 subtype: nvme subsystem 00:16:54.791 treq: not specified, sq flow control disable supported 00:16:54.791 portid: 1 00:16:54.791 trsvcid: 4420 00:16:54.791 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:54.791 traddr: 10.0.0.1 00:16:54.791 eflags: none 00:16:54.791 sectype: none 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.791 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.050 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.050 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.050 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.051 nvme0n1 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.051 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 nvme0n1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.310 
17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.310 17:02:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 nvme0n1 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.310 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:55.569 17:02:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 nvme0n1 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.569 17:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.569 17:02:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.569 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.570 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.570 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 nvme0n1 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.903 
17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
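[editor's sketch] For orientation, the connect_authenticate rounds traced above and below boil down to the following RPC sequence. This is a minimal sketch, assuming a direct ./scripts/rpc.py invocation (the log itself goes through the test suite's rpc_cmd wrapper) and assuming key1/ckey1 were already registered as keyring entries earlier in auth.sh; only the four RPC calls and the jq filter below appear verbatim in this trace.

  # Sketch only: rpc.py path is an assumption; key1/ckey1 are keyring names set up
  # earlier by auth.sh (their registration is not part of this excerpt).
  rpc=./scripts/rpc.py

  # Limit the initiator to a single digest/DH-group combination for this round.
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Connect with DH-HMAC-CHAP: --dhchap-key authenticates the host to the target,
  # --dhchap-ctrlr-key additionally requests bidirectional (controller) authentication.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller attached, then detach before the next combination.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects "nvme0"
  $rpc bdev_nvme_detach_controller nvme0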
00:16:55.903 nvme0n1 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.903 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.470 17:02:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.470 nvme0n1 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.470 17:02:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.470 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.471 17:02:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.471 17:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 nvme0n1 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 nvme0n1 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.989 nvme0n1 00:16:56.989 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.990 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:57.248 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.249 nvme0n1 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.249 17:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.185 17:02:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.185 nvme0n1 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.185 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.186 17:02:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.186 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 nvme0n1 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.445 17:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 nvme0n1 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.704 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.963 nvme0n1 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.963 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.964 17:02:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.964 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.224 nvme0n1 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:59.224 17:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 nvme0n1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.134 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.408 17:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 nvme0n1 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.667 17:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 17:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 nvme0n1 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.233 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.234 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.234 
17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.492 nvme0n1 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.492 17:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.492 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.493 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.059 nvme0n1 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.059 17:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.059 17:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.625 nvme0n1 00:17:03.625 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.625 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.625 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.626 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:03.884 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.885 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.452 nvme0n1 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.452 
17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.452 17:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.019 nvme0n1 00:17:05.019 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.277 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.278 17:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.844 nvme0n1 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.844 17:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.844 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.845 17:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.845 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.411 nvme0n1 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.411 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.670 nvme0n1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.929 nvme0n1 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:06.929 
17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:06.929 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 nvme0n1 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.930 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.189 
17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 nvme0n1 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.448 nvme0n1 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.448 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.449 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.708 nvme0n1 00:17:07.708 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.708 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.708 17:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.708 
17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.708 17:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.708 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.709 nvme0n1 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.709 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:07.968 17:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.968 nvme0n1 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.968 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 17:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 nvme0n1 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.252 
17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:08.252 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.253 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
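The trace above repeats one pattern per digest/DH-group/keyid combination: restrict the host to a single DH-HMAC-CHAP digest and DH group, attach the controller with the matching host key (and controller key, when one is defined), confirm it enumerates as nvme0, then detach before the next combination. A minimal standalone sketch of that sequence follows; it is only an illustration of the RPC calls recorded in the log, not the test script itself, and it assumes SPDK's scripts/rpc.py is on PATH, that the target already listens on 10.0.0.1:4420 for nqn.2024-02.io.spdk:cnode0, and that keys named key0..key4 / ckey0..ckey4 have already been registered.

#!/usr/bin/env bash
# Illustrative sketch of one authentication pass as traced above
# (assumed environment; key names are placeholders for pre-registered keys).
RPC=./scripts/rpc.py                 # assumed location of SPDK's RPC client
digest=sha384 dhgroup=ffdhe3072 keyid=2

# Limit the host to one digest/DH group, as bdev_nvme_set_options does in the trace.
"$RPC" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with the host key for this keyid; the logged run omits
# --dhchap-ctrlr-key when no controller key is configured (e.g. keyid 4).
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller came up, then tear it down before the next combination.
"$RPC" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$RPC" bdev_nvme_detach_controller nvme0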
00:17:08.513 nvme0n1 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:08.513 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.514 17:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.514 17:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.774 nvme0n1 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.774 17:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.774 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.775 17:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.775 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.034 nvme0n1 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.034 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.035 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.294 nvme0n1 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.294 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 nvme0n1 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 17:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.814 nvme0n1 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.814 17:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.814 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.382 nvme0n1 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.382 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.383 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.383 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.383 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.383 17:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.383 17:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 nvme0n1 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.950 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.209 nvme0n1 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:11.209 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.210 17:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.778 nvme0n1 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:11.778 17:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.778 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.346 nvme0n1 00:17:12.346 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.346 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.347 17:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.915 nvme0n1 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.915 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.916 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.483 nvme0n1 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.483 17:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.483 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.484 17:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.484 17:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 nvme0n1 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.053 17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.053 
17:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.621 nvme0n1 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.621 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 nvme0n1 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:15.200 17:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.200 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.201 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.201 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.201 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.466 17:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.466 nvme0n1 00:17:15.466 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:15.467 17:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.467 17:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.726 nvme0n1 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.726 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.727 nvme0n1 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:15.727 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.986 nvme0n1 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.986 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 nvme0n1 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.246 nvme0n1 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:16.505 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 nvme0n1 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 17:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:16.506 
17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.506 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 nvme0n1 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.765 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.766 
17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.766 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 nvme0n1 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 nvme0n1 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.025 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 nvme0n1 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.284 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.543 
17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.543 17:02:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.543 17:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 nvme0n1 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:17.802 17:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.802 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 nvme0n1 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.061 17:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.061 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.319 nvme0n1 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.319 
17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.319 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
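[Editor's note] The trace above repeats one pattern per (digest, dhgroup, keyid) combination: host/auth.sh seeds the target key with nvmet_auth_set_key, configures the host with bdev_nvme_set_options, attaches with the matching DH-HMAC-CHAP key, verifies the controller came up, and detaches. A minimal sketch of that per-key step is below; it assumes rpc_cmd wraps SPDK's scripts/rpc.py as in the surrounding autotest helpers, and that key${keyid}/ckey${keyid} are the keyring names the script registered earlier (the 10.0.0.1 address, NQNs, and flags are taken verbatim from the trace).
# Sketch only: one iteration of connect_authenticate for digest=sha512, dhgroup=ffdhe4096, keyid=4.
digest=sha512 dhgroup=ffdhe4096 keyid=4
# Restrict the host to the digest/dhgroup under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach with the host key; the controller key is passed only when defined
# (keyid 4 has no ckey, so the expansion below adds nothing - see host/auth.sh@58 in the trace).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
# Authentication succeeded if the controller shows up, then clean up for the next keyid.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
The outer loops visible in the trace simply repeat this for each dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and each keyid 0-4 under the sha512 digest.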
00:17:18.578 nvme0n1 00:17:18.578 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.578 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.578 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.578 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.578 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:18.579 17:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.579 17:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.838 nvme0n1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.838 17:02:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.838 17:02:37 
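
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that repeats at host/auth.sh@58 is the switch between unidirectional and bidirectional authentication: bash's ${var:+word} expansion yields the extra arguments only when a controller key exists for that keyid. A tiny self-contained illustration, with placeholder array contents rather than the test's secrets:

    ckeys=( 'some-ctrlr-secret' '' )                     # keyid 0 has a ckey, keyid 1 does not
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args"    # prints 2 for keyid 0, 0 for keyid 1
    done
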
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.838 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.839 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.406 nvme0n1 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.406 17:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.666 nvme0n1 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:19.666 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.667 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 nvme0n1 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.236 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 nvme0n1 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.495 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNjkwODEwOWFlMGNiOWJjNjVkY2JjMWVhYzRhMziP7dUY: 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: ]] 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTI4Yjg3MmNmODU0MzMyNGZlNGZjMTFkNjMxMTRlY2ZlYWY0MTFlYjFmMGY5YmU4Yzk0NTI2MWM2NjBkZTNhYwUAUbo=: 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.496 17:02:38 
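
The lines above mark the start of the ffdhe8192 pass of the same sweep. Reconstructed from the host/auth.sh@101 through @104 trace markers, the loop shape is roughly the following; treat it as a paraphrase rather than the script verbatim, and note the digest loop sits outside this excerpt with sha512 as the digest in flight here.

    for dhgroup in "${dhgroups[@]}"; do         # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do          # 0 through 4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # provision the target side
            connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify, detach from the host
        done
    done
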
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.496 17:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.064 nvme0n1 00:17:21.064 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.064 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.064 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.064 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.064 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.324 17:02:39 
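
The get_main_ns_ip trace that precedes every attach (nvmf/common.sh@769 through @783) is the helper choosing which address to dial. Below is a condensed paraphrase of the logic visible in the trace; the transport variable name is assumed since only its expanded value tcp is shown, and the real helper also guards each step with the [[ -z ... ]] checks seen above.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var, e.g. NVMF_INITIATOR_IP
        echo "${!ip}"                          # indirect expansion, 10.0.0.1 in this run
    }
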
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.324 17:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.891 nvme0n1 00:17:21.891 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.891 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.891 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.891 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.891 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.151 17:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 nvme0n1 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.016 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.016 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.016 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.016 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDM2YjQ1YzJiZDFkMWM1MDM5YWZiNTg3ZDhiYzM2NmEyNTQ5OWU5Y2ZjMTQ5YzVjmCxyQQ==: 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: ]] 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjAyMjM4NzRhZDg5MjcxNzM4M2ZiMTMyMzg4ZGJiMWGtc+s8: 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.017 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.018 17:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.956 nvme0n1 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI5OTQ0ZjYyZmU2ZGZlMjY5YTlkMjlkNmIzMjU2M2E3NTNhMDI1ZjFkZmI0YWQ4YTMzMTI3MmNlZTMyYmQyNsZ0dcc=: 00:17:23.956 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:23.957 17:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.957 17:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.524 nvme0n1 00:17:24.524 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.524 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.524 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.524 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.524 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.784 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.785 request: 00:17:24.785 { 00:17:24.785 "name": "nvme0", 00:17:24.785 "trtype": "tcp", 00:17:24.785 "traddr": "10.0.0.1", 00:17:24.785 "adrfam": "ipv4", 00:17:24.785 "trsvcid": "4420", 00:17:24.785 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:24.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:24.785 "prchk_reftag": false, 00:17:24.785 "prchk_guard": false, 00:17:24.785 "hdgst": false, 00:17:24.785 "ddgst": false, 00:17:24.785 "allow_unrecognized_csi": false, 00:17:24.785 "method": "bdev_nvme_attach_controller", 00:17:24.785 "req_id": 1 00:17:24.785 } 00:17:24.785 Got JSON-RPC error response 00:17:24.785 response: 00:17:24.785 { 00:17:24.785 "code": -5, 00:17:24.785 "message": "Input/output error" 00:17:24.785 } 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.785 request: 00:17:24.785 { 00:17:24.785 "name": "nvme0", 00:17:24.785 "trtype": "tcp", 00:17:24.785 "traddr": "10.0.0.1", 00:17:24.785 "adrfam": "ipv4", 00:17:24.785 "trsvcid": "4420", 00:17:24.785 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:24.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:24.785 "prchk_reftag": false, 00:17:24.785 "prchk_guard": false, 00:17:24.785 "hdgst": false, 00:17:24.785 "ddgst": false, 00:17:24.785 "dhchap_key": "key2", 00:17:24.785 "allow_unrecognized_csi": false, 00:17:24.785 "method": "bdev_nvme_attach_controller", 00:17:24.785 "req_id": 1 00:17:24.785 } 00:17:24.785 Got JSON-RPC error response 00:17:24.785 response: 00:17:24.785 { 00:17:24.785 "code": -5, 00:17:24.785 "message": "Input/output error" 00:17:24.785 } 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.785 17:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:24.785 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.045 request: 00:17:25.045 { 00:17:25.045 "name": "nvme0", 00:17:25.045 "trtype": "tcp", 00:17:25.045 "traddr": "10.0.0.1", 00:17:25.045 "adrfam": "ipv4", 00:17:25.045 "trsvcid": "4420", 
00:17:25.045 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:25.045 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:25.045 "prchk_reftag": false, 00:17:25.045 "prchk_guard": false, 00:17:25.045 "hdgst": false, 00:17:25.045 "ddgst": false, 00:17:25.045 "dhchap_key": "key1", 00:17:25.045 "dhchap_ctrlr_key": "ckey2", 00:17:25.045 "allow_unrecognized_csi": false, 00:17:25.045 "method": "bdev_nvme_attach_controller", 00:17:25.045 "req_id": 1 00:17:25.045 } 00:17:25.045 Got JSON-RPC error response 00:17:25.045 response: 00:17:25.045 { 00:17:25.045 "code": -5, 00:17:25.045 "message": "Input/output error" 00:17:25.045 } 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.045 nvme0n1 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:25.045 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.046 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.304 request: 00:17:25.305 { 00:17:25.305 "name": "nvme0", 00:17:25.305 "dhchap_key": "key1", 00:17:25.305 "dhchap_ctrlr_key": "ckey2", 00:17:25.305 "method": "bdev_nvme_set_keys", 00:17:25.305 "req_id": 1 00:17:25.305 } 00:17:25.305 Got JSON-RPC error response 00:17:25.305 response: 00:17:25.305 
{ 00:17:25.305 "code": -13, 00:17:25.305 "message": "Permission denied" 00:17:25.305 } 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:25.305 17:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjY2RmZmU0YmFjZWVhYmYyNTAwNjAzZThkZmFiYjQ3NWNiNWVmZTI2Nzc3MWQ4eB7V5g==: 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: ]] 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZWNlZTZiZWU1OGJkNjFkYTZkNjRjMDRiZmZjNTU2MDBlZDg0YzQyYjkzODZlZHW4nQ==: 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.242 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 nvme0n1 00:17:26.502 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.502 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RhOTJkZGZjMDE4ZDc5M2E5YjM1YTRhNjE0MTE2ODJQDzdL: 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: ]] 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA0OGRhNzQwY2QxMzRmNDA1M2Y5ZDc2MWJiMDliNze6Fv+C: 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.503 request: 00:17:26.503 { 00:17:26.503 "name": "nvme0", 00:17:26.503 "dhchap_key": "key2", 00:17:26.503 "dhchap_ctrlr_key": "ckey1", 00:17:26.503 "method": "bdev_nvme_set_keys", 00:17:26.503 "req_id": 1 00:17:26.503 } 00:17:26.503 Got JSON-RPC error response 00:17:26.503 response: 00:17:26.503 { 00:17:26.503 "code": -13, 00:17:26.503 "message": "Permission denied" 00:17:26.503 } 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:26.503 17:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:27.439 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.439 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:27.439 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.439 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.439 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.698 17:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.698 rmmod nvme_tcp 00:17:27.698 rmmod nvme_fabrics 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78407 ']' 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78407 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78407 ']' 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78407 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78407 00:17:27.698 killing process with pid 78407 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78407' 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78407 00:17:27.698 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78407 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:27.957 17:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.957 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:28.216 17:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:28.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:29.042 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:29.042 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:29.042 17:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.L4y /tmp/spdk.key-null.H2P /tmp/spdk.key-sha256.KOB /tmp/spdk.key-sha384.DKE /tmp/spdk.key-sha512.sAD /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:29.042 17:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:29.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:29.611 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:29.611 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:29.611 00:17:29.611 real 0m38.619s 00:17:29.611 user 0m34.640s 00:17:29.611 sys 0m4.263s 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.611 ************************************ 00:17:29.611 END TEST nvmf_auth_host 00:17:29.611 ************************************ 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.611 ************************************ 00:17:29.611 START TEST nvmf_digest 00:17:29.611 ************************************ 00:17:29.611 17:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:29.611 * Looking for test storage... 
00:17:29.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.611 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.611 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.611 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.871 --rc genhtml_branch_coverage=1 00:17:29.871 --rc genhtml_function_coverage=1 00:17:29.871 --rc genhtml_legend=1 00:17:29.871 --rc geninfo_all_blocks=1 00:17:29.871 --rc geninfo_unexecuted_blocks=1 00:17:29.871 00:17:29.871 ' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.871 --rc genhtml_branch_coverage=1 00:17:29.871 --rc genhtml_function_coverage=1 00:17:29.871 --rc genhtml_legend=1 00:17:29.871 --rc geninfo_all_blocks=1 00:17:29.871 --rc geninfo_unexecuted_blocks=1 00:17:29.871 00:17:29.871 ' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.871 --rc genhtml_branch_coverage=1 00:17:29.871 --rc genhtml_function_coverage=1 00:17:29.871 --rc genhtml_legend=1 00:17:29.871 --rc geninfo_all_blocks=1 00:17:29.871 --rc geninfo_unexecuted_blocks=1 00:17:29.871 00:17:29.871 ' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.871 --rc genhtml_branch_coverage=1 00:17:29.871 --rc genhtml_function_coverage=1 00:17:29.871 --rc genhtml_legend=1 00:17:29.871 --rc geninfo_all_blocks=1 00:17:29.871 --rc geninfo_unexecuted_blocks=1 00:17:29.871 00:17:29.871 ' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.871 17:02:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.871 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.872 Cannot find device "nvmf_init_br" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.872 Cannot find device "nvmf_init_br2" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.872 Cannot find device "nvmf_tgt_br" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:29.872 Cannot find device "nvmf_tgt_br2" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.872 Cannot find device "nvmf_init_br" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.872 Cannot find device "nvmf_init_br2" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:29.872 Cannot find device "nvmf_tgt_br" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:29.872 Cannot find device "nvmf_tgt_br2" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:29.872 Cannot find device "nvmf_br" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:29.872 Cannot find device "nvmf_init_if" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:29.872 Cannot find device "nvmf_init_if2" 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.872 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:29.873 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.145 17:02:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:30.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:30.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:17:30.145 00:17:30.145 --- 10.0.0.3 ping statistics --- 00:17:30.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.145 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:30.145 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:30.145 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:17:30.145 00:17:30.145 --- 10.0.0.4 ping statistics --- 00:17:30.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.145 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:30.145 00:17:30.145 --- 10.0.0.1 ping statistics --- 00:17:30.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.145 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:30.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:17:30.145 00:17:30.145 --- 10.0.0.2 ping statistics --- 00:17:30.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.145 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:30.145 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.146 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:30.417 ************************************ 00:17:30.417 START TEST nvmf_digest_clean 00:17:30.417 ************************************ 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
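The nvmf/common.sh sequence above (script lines @177 through @225) is what builds the test network before any NVMe/TCP traffic flows: a dedicated namespace for the target, two veth pairs per side, a bridge joining the host-side peers, ACCEPT rules for the NVMe/TCP port, and a ping sweep in both directions. A minimal stand-alone sketch of the same topology, using only the interface names and addresses that appear in the trace (the real helper adds iptables comment tags and cleanup handling on top of this), is:

    # Namespace for the SPDK target and four veth pairs (host end / bridge end).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target-side interfaces move into the namespace; the initiator side stays in the host.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The bridge ties the four host-side peers together so 10.0.0.1/2 can reach 10.0.0.3/4.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Let NVMe/TCP (port 4420) in on the initiator-side interfaces and allow bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) are the connectivity check that has to pass before modprobe nvme-tcp and the digest tests start.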
00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80065 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80065 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80065 ']' 00:17:30.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.417 17:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.417 [2024-11-20 17:02:48.748977] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:30.417 [2024-11-20 17:02:48.749054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.417 [2024-11-20 17:02:48.903417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.677 [2024-11-20 17:02:48.965237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.677 [2024-11-20 17:02:48.965318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.677 [2024-11-20 17:02:48.965346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.677 [2024-11-20 17:02:48.965356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.677 [2024-11-20 17:02:48.965365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
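nvmfappstart here launches the target inside the namespace that was just built, with --wait-for-rpc so that subsystem initialization is held back until the harness has configured the app over its RPC socket; waitforlisten then blocks until pid 80065 answers on /var/tmp/spdk.sock. A condensed sketch of that startup, with the polling loop written out explicitly (the real waitforlisten helper is more defensive, so treat the loop below as an assumption about its shape rather than its actual code):

    # Start the NVMe-oF target inside the test namespace, deferring subsystem init.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Wait until the app's RPC server is reachable before configuring it.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket is up, common_target_config (digest.sh@43) creates the null0 bdev and the 10.0.0.3:4420 TCP listener that show up a few lines further down.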
00:17:30.677 [2024-11-20 17:02:48.965881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.677 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.677 [2024-11-20 17:02:49.140305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.677 null0 00:17:30.677 [2024-11-20 17:02:49.198243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.935 [2024-11-20 17:02:49.222417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80086 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80086 /var/tmp/bperf.sock 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80086 ']' 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:30.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.935 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.935 [2024-11-20 17:02:49.290452] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:30.935 [2024-11-20 17:02:49.290800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80086 ] 00:17:30.935 [2024-11-20 17:02:49.445691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.194 [2024-11-20 17:02:49.507049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.194 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.194 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:31.194 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:31.194 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:31.194 17:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:31.453 [2024-11-20 17:02:49.964509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:31.716 17:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.716 17:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.974 nvme0n1 00:17:31.974 17:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:31.975 17:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:32.234 Running I/O for 2 seconds... 
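On the initiator side the flow is the same for every run_bperf invocation in this section: bdevperf is started with -z and --wait-for-rpc, the harness finishes its framework init over /var/tmp/bperf.sock, attaches an NVMe-oF controller with TCP data digest enabled (--ddgst, which is the whole point of nvmf_digest_clean), and only then triggers the timed workload. Condensed from the trace above into the three RPC-level steps, with paths and names taken exactly from the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # 1) Finish the initialization that --wait-for-rpc deferred.
    $RPC framework_start_init
    # 2) Connect to the target with data digest on; this creates bdev nvme0n1.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 3) Kick off the 2-second workload bdevperf was configured for (-w randread -o 4096 -q 128).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The -z flag is what leaves bdevperf idle until that final perform_tests call arrives.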
00:17:34.109 14605.00 IOPS, 57.05 MiB/s [2024-11-20T17:02:52.645Z] 15811.50 IOPS, 61.76 MiB/s 00:17:34.109 Latency(us) 00:17:34.109 [2024-11-20T17:02:52.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.109 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:34.109 nvme0n1 : 2.01 15844.39 61.89 0.00 0.00 8072.54 6791.91 22282.24 00:17:34.109 [2024-11-20T17:02:52.645Z] =================================================================================================================== 00:17:34.109 [2024-11-20T17:02:52.645Z] Total : 15844.39 61.89 0.00 0.00 8072.54 6791.91 22282.24 00:17:34.109 { 00:17:34.109 "results": [ 00:17:34.109 { 00:17:34.109 "job": "nvme0n1", 00:17:34.109 "core_mask": "0x2", 00:17:34.109 "workload": "randread", 00:17:34.109 "status": "finished", 00:17:34.109 "queue_depth": 128, 00:17:34.109 "io_size": 4096, 00:17:34.109 "runtime": 2.011943, 00:17:34.109 "iops": 15844.38525345897, 00:17:34.109 "mibps": 61.8921298963241, 00:17:34.109 "io_failed": 0, 00:17:34.109 "io_timeout": 0, 00:17:34.109 "avg_latency_us": 8072.536662388995, 00:17:34.109 "min_latency_us": 6791.912727272727, 00:17:34.109 "max_latency_us": 22282.24 00:17:34.109 } 00:17:34.109 ], 00:17:34.109 "core_count": 1 00:17:34.109 } 00:17:34.109 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:34.109 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:34.109 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:34.109 | select(.opcode=="crc32c") 00:17:34.109 | "\(.module_name) \(.executed)"' 00:17:34.109 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:34.109 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80086 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80086 ']' 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80086 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.368 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80086 00:17:34.368 killing process with pid 80086 00:17:34.368 Received shutdown signal, test time was about 2.000000 seconds 00:17:34.369 00:17:34.369 Latency(us) 00:17:34.369 [2024-11-20T17:02:52.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.369 
[2024-11-20T17:02:52.905Z] =================================================================================================================== 00:17:34.369 [2024-11-20T17:02:52.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.369 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:34.369 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:34.369 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80086' 00:17:34.369 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80086 00:17:34.369 17:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80086 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80139 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80139 /var/tmp/bperf.sock 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80139 ']' 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:34.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.628 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:34.628 [2024-11-20 17:02:53.099536] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:34.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:34.628 Zero copy mechanism will not be used. 
00:17:34.628 [2024-11-20 17:02:53.100013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80139 ] 00:17:34.888 [2024-11-20 17:02:53.245496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.888 [2024-11-20 17:02:53.300630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.888 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.888 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:34.888 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:34.888 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:34.888 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:35.147 [2024-11-20 17:02:53.632099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.406 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:35.406 17:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:35.666 nvme0n1 00:17:35.666 17:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:35.666 17:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:35.924 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:35.924 Zero copy mechanism will not be used. 00:17:35.924 Running I/O for 2 seconds... 
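This second bdevperf instance (pid 80139) repeats the same attach-and-run sequence; only the workload shape changes, from 4 KiB reads at queue depth 128 to 128 KiB reads at queue depth 16:

    # Second run_bperf shape: 128 KiB random reads, queue depth 16 (command as logged above).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc

Because the 131072-byte I/O size is above the 65536-byte zero copy threshold, bdevperf prints the "Zero copy mechanism will not be used" notice seen above; that is informational, not a failure.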
00:17:37.829 6848.00 IOPS, 856.00 MiB/s [2024-11-20T17:02:56.365Z] 7016.00 IOPS, 877.00 MiB/s 00:17:37.829 Latency(us) 00:17:37.829 [2024-11-20T17:02:56.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:37.829 nvme0n1 : 2.00 7012.76 876.59 0.00 0.00 2277.95 1980.97 8102.63 00:17:37.829 [2024-11-20T17:02:56.365Z] =================================================================================================================== 00:17:37.829 [2024-11-20T17:02:56.365Z] Total : 7012.76 876.59 0.00 0.00 2277.95 1980.97 8102.63 00:17:37.829 { 00:17:37.829 "results": [ 00:17:37.829 { 00:17:37.829 "job": "nvme0n1", 00:17:37.829 "core_mask": "0x2", 00:17:37.829 "workload": "randread", 00:17:37.829 "status": "finished", 00:17:37.829 "queue_depth": 16, 00:17:37.829 "io_size": 131072, 00:17:37.829 "runtime": 2.003207, 00:17:37.829 "iops": 7012.755047281684, 00:17:37.829 "mibps": 876.5943809102105, 00:17:37.829 "io_failed": 0, 00:17:37.829 "io_timeout": 0, 00:17:37.829 "avg_latency_us": 2277.946208324705, 00:17:37.829 "min_latency_us": 1980.9745454545455, 00:17:37.829 "max_latency_us": 8102.632727272728 00:17:37.829 } 00:17:37.829 ], 00:17:37.829 "core_count": 1 00:17:37.829 } 00:17:37.829 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:37.829 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:37.829 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:37.829 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:37.829 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:37.829 | select(.opcode=="crc32c") 00:17:37.829 | "\(.module_name) \(.executed)"' 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80139 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80139 ']' 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80139 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80139 00:17:38.095 killing process with pid 80139 00:17:38.095 Received shutdown signal, test time was about 2.000000 seconds 00:17:38.095 00:17:38.095 Latency(us) 00:17:38.095 [2024-11-20T17:02:56.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:38.095 [2024-11-20T17:02:56.631Z] =================================================================================================================== 00:17:38.095 [2024-11-20T17:02:56.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80139' 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80139 00:17:38.095 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80139 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80192 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80192 /var/tmp/bperf.sock 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80192 ']' 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:38.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.354 17:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.354 [2024-11-20 17:02:56.882871] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:17:38.354 [2024-11-20 17:02:56.883168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80192 ] 00:17:38.614 [2024-11-20 17:02:57.033577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.614 [2024-11-20 17:02:57.094785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.552 17:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.552 17:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:39.552 17:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:39.552 17:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:39.552 17:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:39.812 [2024-11-20 17:02:58.293502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.071 17:02:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.071 17:02:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.330 nvme0n1 00:17:40.330 17:02:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:40.330 17:02:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.590 Running I/O for 2 seconds... 
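Each of these runs is judged the same way once the 2 seconds are up: the script reads bdevperf's accel statistics and confirms that crc32c work was actually executed, and by the expected module ("software" here, since no DSA offload is configured in this job). The check, reduced to the two commands that appear in the trace:

    # Pull crc32c execution counts out of bdevperf's accel framework statistics.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh then requires: module_name == software and executed > 0.

The same pattern repeats after every run in this section, which is why the "false / exp_module=software" lines keep reappearing.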
00:17:42.460 18035.00 IOPS, 70.45 MiB/s [2024-11-20T17:03:00.996Z] 17653.50 IOPS, 68.96 MiB/s 00:17:42.460 Latency(us) 00:17:42.460 [2024-11-20T17:03:00.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.460 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.460 nvme0n1 : 2.00 17684.94 69.08 0.00 0.00 7231.94 6374.87 15192.44 00:17:42.460 [2024-11-20T17:03:00.996Z] =================================================================================================================== 00:17:42.460 [2024-11-20T17:03:00.996Z] Total : 17684.94 69.08 0.00 0.00 7231.94 6374.87 15192.44 00:17:42.460 { 00:17:42.460 "results": [ 00:17:42.460 { 00:17:42.460 "job": "nvme0n1", 00:17:42.460 "core_mask": "0x2", 00:17:42.460 "workload": "randwrite", 00:17:42.460 "status": "finished", 00:17:42.460 "queue_depth": 128, 00:17:42.460 "io_size": 4096, 00:17:42.460 "runtime": 2.003682, 00:17:42.460 "iops": 17684.94202173798, 00:17:42.460 "mibps": 69.08180477241399, 00:17:42.460 "io_failed": 0, 00:17:42.460 "io_timeout": 0, 00:17:42.460 "avg_latency_us": 7231.9413567992615, 00:17:42.460 "min_latency_us": 6374.865454545455, 00:17:42.460 "max_latency_us": 15192.436363636363 00:17:42.460 } 00:17:42.460 ], 00:17:42.460 "core_count": 1 00:17:42.460 } 00:17:42.460 17:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:42.460 17:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:42.460 17:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:42.460 17:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:42.460 17:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:42.460 | select(.opcode=="crc32c") 00:17:42.460 | "\(.module_name) \(.executed)"' 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80192 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80192 ']' 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80192 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80192 00:17:42.720 killing process with pid 80192 00:17:42.720 Received shutdown signal, test time was about 2.000000 seconds 00:17:42.720 00:17:42.720 Latency(us) 00:17:42.720 [2024-11-20T17:03:01.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:42.720 [2024-11-20T17:03:01.256Z] =================================================================================================================== 00:17:42.720 [2024-11-20T17:03:01.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80192' 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80192 00:17:42.720 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80192 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80253 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:42.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80253 /var/tmp/bperf.sock 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80253 ']' 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.979 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.980 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.980 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:42.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.980 Zero copy mechanism will not be used. 00:17:42.980 [2024-11-20 17:03:01.472162] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:17:42.980 [2024-11-20 17:03:01.472280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80253 ] 00:17:43.238 [2024-11-20 17:03:01.617494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.238 [2024-11-20 17:03:01.671006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.238 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.238 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:43.238 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:43.238 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:43.238 17:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:43.497 [2024-11-20 17:03:01.998062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.755 17:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.755 17:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.014 nvme0n1 00:17:44.014 17:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:44.014 17:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.014 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.014 Zero copy mechanism will not be used. 00:17:44.014 Running I/O for 2 seconds... 
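For orientation, nvmf_digest_clean drives four workload shapes through run_bperf, all with data digest enabled and DSA scanning disabled; the fourth one is the run starting above:

    # From host/digest.sh@128-131 in the trace: run_bperf <rw> <io size> <qd> <scan_dsa>
    run_bperf randread  4096   128 false    # 4 KiB random reads,   qd 128
    run_bperf randread  131072 16  false    # 128 KiB random reads,  qd 16
    run_bperf randwrite 4096   128 false    # 4 KiB random writes,  qd 128
    run_bperf randwrite 131072 16  false    # 128 KiB random writes, qd 16

Each shape gets its own short-lived bdevperf process (pids 80086, 80139, 80192 and 80253 in this log), all attached to the same target listener at 10.0.0.3:4420.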
00:17:46.336 6779.00 IOPS, 847.38 MiB/s [2024-11-20T17:03:04.872Z] 7329.00 IOPS, 916.12 MiB/s 00:17:46.336 Latency(us) 00:17:46.336 [2024-11-20T17:03:04.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:46.336 nvme0n1 : 2.00 7325.40 915.68 0.00 0.00 2179.12 1630.95 8817.57 00:17:46.336 [2024-11-20T17:03:04.872Z] =================================================================================================================== 00:17:46.336 [2024-11-20T17:03:04.872Z] Total : 7325.40 915.68 0.00 0.00 2179.12 1630.95 8817.57 00:17:46.336 { 00:17:46.336 "results": [ 00:17:46.336 { 00:17:46.336 "job": "nvme0n1", 00:17:46.336 "core_mask": "0x2", 00:17:46.336 "workload": "randwrite", 00:17:46.336 "status": "finished", 00:17:46.336 "queue_depth": 16, 00:17:46.336 "io_size": 131072, 00:17:46.336 "runtime": 2.003167, 00:17:46.336 "iops": 7325.400228737793, 00:17:46.336 "mibps": 915.6750285922242, 00:17:46.336 "io_failed": 0, 00:17:46.336 "io_timeout": 0, 00:17:46.336 "avg_latency_us": 2179.1230559926653, 00:17:46.336 "min_latency_us": 1630.9527272727273, 00:17:46.336 "max_latency_us": 8817.57090909091 00:17:46.336 } 00:17:46.336 ], 00:17:46.336 "core_count": 1 00:17:46.336 } 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:46.336 | select(.opcode=="crc32c") 00:17:46.336 | "\(.module_name) \(.executed)"' 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80253 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80253 ']' 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80253 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80253 00:17:46.336 killing process with pid 80253 00:17:46.336 Received shutdown signal, test time was about 2.000000 seconds 00:17:46.336 00:17:46.336 Latency(us) 00:17:46.336 [2024-11-20T17:03:04.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:46.336 [2024-11-20T17:03:04.872Z] =================================================================================================================== 00:17:46.336 [2024-11-20T17:03:04.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80253' 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80253 00:17:46.336 17:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80253 00:17:46.595 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80065 00:17:46.595 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80065 ']' 00:17:46.595 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80065 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80065 00:17:46.596 killing process with pid 80065 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80065' 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80065 00:17:46.596 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80065 00:17:46.854 00:17:46.854 real 0m16.569s 00:17:46.854 user 0m32.709s 00:17:46.854 sys 0m4.667s 00:17:46.854 ************************************ 00:17:46.854 END TEST nvmf_digest_clean 00:17:46.854 ************************************ 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:46.854 ************************************ 00:17:46.854 START TEST nvmf_digest_error 00:17:46.854 ************************************ 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:46.854 17:03:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.854 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80329 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80329 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80329 ']' 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.855 17:03:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.855 [2024-11-20 17:03:05.360385] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:46.855 [2024-11-20 17:03:05.360670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.113 [2024-11-20 17:03:05.499381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.113 [2024-11-20 17:03:05.557327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.113 [2024-11-20 17:03:05.557380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.113 [2024-11-20 17:03:05.557406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.113 [2024-11-20 17:03:05.557414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.113 [2024-11-20 17:03:05.557420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
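nvmf_digest_error is the negative counterpart of the test above: the target is again started with --wait-for-rpc, but this time so that, before initialization completes, crc32c can be rerouted to the accel "error" module and corruption injected on demand; the initiator is then expected to observe data digest failures. The knobs involved, copied from the RPC calls that appear further down in this trace (rpc_cmd is the harness wrapper that talks to the target's /var/tmp/spdk.sock; the pre-init requirement is an assumption based on how the script orders these calls):

    # Route all crc32c operations through the error-injection accel module
    # (issued while the target is still in its --wait-for-rpc state).
    rpc_cmd accel_assign_opc -o crc32c -m error
    # Injection starts disabled, then is switched to corrupting crc32c results
    # (-t corrupt -i 256, exactly as issued below).
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

The corrupted digests computed on the target are what the initiator's nvme_tcp layer reports below as "data digest error on tqpair", and the qpair logs the affected commands as COMMAND TRANSIENT TRANSPORT ERROR completions.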
00:17:47.113 [2024-11-20 17:03:05.557786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.049 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.049 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.050 [2024-11-20 17:03:06.402395] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.050 [2024-11-20 17:03:06.468061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.050 null0 00:17:48.050 [2024-11-20 17:03:06.523479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.050 [2024-11-20 17:03:06.547692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80361 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80361 /var/tmp/bperf.sock 00:17:48.050 17:03:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80361 ']' 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:48.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.050 17:03:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.309 [2024-11-20 17:03:06.614707] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:48.309 [2024-11-20 17:03:06.615029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80361 ] 00:17:48.309 [2024-11-20 17:03:06.768475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.309 [2024-11-20 17:03:06.830813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.567 [2024-11-20 17:03:06.889989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:49.135 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.135 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:49.135 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:49.135 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.393 17:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.961 nvme0n1 00:17:49.961 17:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:49.961 17:03:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.961 17:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.961 17:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.961 17:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:49.961 17:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:49.961 Running I/O for 2 seconds... 00:17:49.961 [2024-11-20 17:03:08.387288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.387356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.387389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.403464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.403718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.403753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.419356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.419396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.419425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.435023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.435062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.435092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.450756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.450822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.466189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.466226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17833 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.466254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.961 [2024-11-20 17:03:08.481754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:49.961 [2024-11-20 17:03:08.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.961 [2024-11-20 17:03:08.481819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.497372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.497411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.497441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.515396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.515467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.515483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.532890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.533138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.533157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.549086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.549124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.549154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.565000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.565042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.565071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.581361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.581428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:15705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.581467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.597319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.597364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.612957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.612995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.613024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.628449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.628644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.628678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.644387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.644425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.644454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.659961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.221 [2024-11-20 17:03:08.660161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.221 [2024-11-20 17:03:08.660195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.221 [2024-11-20 17:03:08.675788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.222 [2024-11-20 17:03:08.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.222 [2024-11-20 17:03:08.676056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.222 [2024-11-20 17:03:08.691912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.222 [2024-11-20 17:03:08.691977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.222 [2024-11-20 17:03:08.692007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.222 [2024-11-20 17:03:08.708381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.222 [2024-11-20 17:03:08.708432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.222 [2024-11-20 17:03:08.708447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.222 [2024-11-20 17:03:08.726037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.222 [2024-11-20 17:03:08.726077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.222 [2024-11-20 17:03:08.726091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.222 [2024-11-20 17:03:08.743404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.222 [2024-11-20 17:03:08.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.222 [2024-11-20 17:03:08.743641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.759933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.759970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.759999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.775736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.775774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.775802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.791219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.791434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.791467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.807060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 
[2024-11-20 17:03:08.807099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.807128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.822532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.822729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.822762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.838507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.838546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.838575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.854244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.854282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.854311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.869782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.869819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.869878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.885245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.885310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.901004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.901040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.901068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.916623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.916667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.916696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.932707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.932755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.932784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.948323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.948361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.948390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.963947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.963983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.964012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.979666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.979705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.979735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:08.994913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:08.994946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:08.994958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.481 [2024-11-20 17:03:09.011139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.481 [2024-11-20 17:03:09.011178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.481 [2024-11-20 17:03:09.011192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.743 [2024-11-20 17:03:09.028421] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.743 [2024-11-20 17:03:09.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.743 [2024-11-20 17:03:09.028511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.743 [2024-11-20 17:03:09.044489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.743 [2024-11-20 17:03:09.044542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.743 [2024-11-20 17:03:09.044570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.744 [2024-11-20 17:03:09.059771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.744 [2024-11-20 17:03:09.059808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.744 [2024-11-20 17:03:09.059836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.744 [2024-11-20 17:03:09.075070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.744 [2024-11-20 17:03:09.075106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.744 [2024-11-20 17:03:09.075134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.744 [2024-11-20 17:03:09.090319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.744 [2024-11-20 17:03:09.090562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.745 [2024-11-20 17:03:09.090596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.745 [2024-11-20 17:03:09.105925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.745 [2024-11-20 17:03:09.105960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.745 [2024-11-20 17:03:09.105989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.745 [2024-11-20 17:03:09.121015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.745 [2024-11-20 17:03:09.121050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.745 [2024-11-20 17:03:09.121078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
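The burst of "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR" entries above is the intended behavior of this test case: crc32c on the nvmf target is routed through the software "error" accel module and then told to corrupt its results, so the data digests it puts on the wire stop matching the payload, and the bdevperf host (attached with --ddgst) flags every affected read. A minimal sketch of that setup, pieced together from the RPC calls traced earlier in this run, is shown below; the socket path, target address, subsystem NQN and injection count are copied from the log, while which application each default-socket call lands on is inferred from the trace rather than re-verified.

  # Target side (default RPC socket): send crc32c through the error-injection accel module.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

  # bdevperf side (/var/tmp/bperf.sock): keep per-controller error stats and retry forever.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Leave crc32c uncorrupted while the controller connects with data digest enabled.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c on 256 operations for the I/O run; each corrupted digest surfaces
  # in the trace as one digest error plus a TRANSIENT TRANSPORT ERROR completion.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
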
00:17:50.745 [2024-11-20 17:03:09.136175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.745 [2024-11-20 17:03:09.136353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.745 [2024-11-20 17:03:09.136386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.745 [2024-11-20 17:03:09.152344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.746 [2024-11-20 17:03:09.152384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.746 [2024-11-20 17:03:09.152413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.746 [2024-11-20 17:03:09.168642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.746 [2024-11-20 17:03:09.168711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.746 [2024-11-20 17:03:09.168739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.746 [2024-11-20 17:03:09.185723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.746 [2024-11-20 17:03:09.185763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.746 [2024-11-20 17:03:09.185793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.746 [2024-11-20 17:03:09.203120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.747 [2024-11-20 17:03:09.203157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.747 [2024-11-20 17:03:09.203184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.747 [2024-11-20 17:03:09.221035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.747 [2024-11-20 17:03:09.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.747 [2024-11-20 17:03:09.221133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.747 [2024-11-20 17:03:09.238885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.747 [2024-11-20 17:03:09.238965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.747 [2024-11-20 17:03:09.238979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.747 [2024-11-20 17:03:09.256986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.748 [2024-11-20 17:03:09.257039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.748 [2024-11-20 17:03:09.257068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.748 [2024-11-20 17:03:09.274863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:50.748 [2024-11-20 17:03:09.274927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.748 [2024-11-20 17:03:09.274957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.291811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.291873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.291903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.308537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.308609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.308637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.324675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.324735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.324764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.340985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.341041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.341070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.357378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.357417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.357446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 15560.00 IOPS, 60.78 MiB/s [2024-11-20T17:03:09.587Z] [2024-11-20 17:03:09.373942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.374151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.374184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.389633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.389889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.390115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.413926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.051 [2024-11-20 17:03:09.414155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.051 [2024-11-20 17:03:09.414361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.051 [2024-11-20 17:03:09.429927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.430126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.430358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.445522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.445732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.445891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.461061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.461256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.461400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.476879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.477072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.477211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.492215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.492457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.492603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.508351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.508568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.508703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.525992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.526200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.526316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.052 [2024-11-20 17:03:09.543975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.052 [2024-11-20 17:03:09.544016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.052 [2024-11-20 17:03:09.544044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.310 [2024-11-20 17:03:09.560728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.560780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.560809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.576072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.576321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.576339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.591293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.591330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.591357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.605969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.606196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.620936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.620973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.621001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.635475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.635512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.635540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.650512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.650704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.665931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.665968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.665995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.680801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.681035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.681067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.695916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 
00:17:51.311 [2024-11-20 17:03:09.696105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.711134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.711349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.711381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.726381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.726573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.726606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.743317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.743353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.743381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.760480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.760699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.760727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.777223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.777522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.792951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.793002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.793032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.807750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.808013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.808047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.822666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.822704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.822731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.311 [2024-11-20 17:03:09.837373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.311 [2024-11-20 17:03:09.837409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.311 [2024-11-20 17:03:09.837437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.570 [2024-11-20 17:03:09.851935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.570 [2024-11-20 17:03:09.851969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.570 [2024-11-20 17:03:09.851997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.570 [2024-11-20 17:03:09.866729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.570 [2024-11-20 17:03:09.866952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.570 [2024-11-20 17:03:09.866984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.570 [2024-11-20 17:03:09.881500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.570 [2024-11-20 17:03:09.881537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.570 [2024-11-20 17:03:09.881564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.570 [2024-11-20 17:03:09.896256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.570 [2024-11-20 17:03:09.896473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.570 [2024-11-20 17:03:09.896491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.570 [2024-11-20 17:03:09.911412] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.570 [2024-11-20 17:03:09.911694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.911714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:09.926566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:09.926776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.926793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:09.941604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:09.941640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.941669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:09.956303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:09.956496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.956543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:09.971253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:09.971442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.971473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:09.986211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:09.986439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:09.986471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.001410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.001446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.001474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
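After the 2-second randread run, bdevperf prints the per-job latency summary seen a little further down, and the test then verifies that at least one completion really carried the TRANSIENT TRANSPORT ERROR status. A sketch of that final check, reusing the perform_tests call and the bdev_get_iostat plus jq filter traced later in this run, follows; the errcount variable name is illustrative (the script itself wraps this step in get_transient_errcount).

  # Drive the timed I/O through the already-running bdevperf instance.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Pull the NVMe error counters recorded thanks to --nvme-error-stat and count how many
  # completions ended as COMMAND TRANSIENT TRANSPORT ERROR (this run reports 124).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The test only requires the count to be non-zero.
  (( errcount > 0 ))
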
00:17:51.571 [2024-11-20 17:03:10.017217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.017254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.017298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.033960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.033997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.034026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.050149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.050185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.050215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.066127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.066163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.066190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.081867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.081903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.081931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.571 [2024-11-20 17:03:10.097559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.571 [2024-11-20 17:03:10.097596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.571 [2024-11-20 17:03:10.097624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.113243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.113280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.113308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.128991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.129027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.129054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.144667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.144898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.144930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.160659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.160873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.160892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.176754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.176995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.177118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.193377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.193595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.193776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.210366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.210570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.210694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.228090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.228323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.228457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.245965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.246140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.246264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.263440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.263640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.263766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.280006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.280205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.280367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.296304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.296507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.296640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.313412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.313563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.313581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.330685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.330724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.330754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.346812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.346857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.346885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.831 [2024-11-20 17:03:10.364800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7872d0) 00:17:51.831 [2024-11-20 17:03:10.364865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.831 [2024-11-20 17:03:10.364895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.090 15750.00 IOPS, 61.52 MiB/s 00:17:52.090 Latency(us) 00:17:52.090 [2024-11-20T17:03:10.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.090 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:52.090 nvme0n1 : 2.01 15784.20 61.66 0.00 0.00 8103.44 7089.80 31218.97 00:17:52.090 [2024-11-20T17:03:10.626Z] =================================================================================================================== 00:17:52.090 [2024-11-20T17:03:10.626Z] Total : 15784.20 61.66 0.00 0.00 8103.44 7089.80 31218.97 00:17:52.090 { 00:17:52.090 "results": [ 00:17:52.090 { 00:17:52.090 "job": "nvme0n1", 00:17:52.090 "core_mask": "0x2", 00:17:52.090 "workload": "randread", 00:17:52.090 "status": "finished", 00:17:52.090 "queue_depth": 128, 00:17:52.090 "io_size": 4096, 00:17:52.090 "runtime": 2.011759, 00:17:52.090 "iops": 15784.196814827223, 00:17:52.090 "mibps": 61.65701880791884, 00:17:52.090 "io_failed": 0, 00:17:52.090 "io_timeout": 0, 00:17:52.090 "avg_latency_us": 8103.444670907603, 00:17:52.090 "min_latency_us": 7089.8036363636365, 00:17:52.090 "max_latency_us": 31218.967272727274 00:17:52.090 } 00:17:52.090 ], 00:17:52.090 "core_count": 1 00:17:52.090 } 00:17:52.090 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:52.090 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:52.090 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:52.090 | .driver_specific 00:17:52.090 | .nvme_error 00:17:52.090 | .status_code 00:17:52.090 | .command_transient_transport_error' 00:17:52.090 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80361 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80361 ']' 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80361 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80361 00:17:52.349 killing process with pid 80361 00:17:52.349 Received shutdown signal, test time was 
about 2.000000 seconds 00:17:52.349 00:17:52.349 Latency(us) 00:17:52.349 [2024-11-20T17:03:10.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.349 [2024-11-20T17:03:10.885Z] =================================================================================================================== 00:17:52.349 [2024-11-20T17:03:10.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80361' 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80361 00:17:52.349 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80361 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80427 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80427 /var/tmp/bperf.sock 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80427 ']' 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:52.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.608 17:03:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:52.608 [2024-11-20 17:03:11.027897] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:52.608 [2024-11-20 17:03:11.028260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80427 ] 00:17:52.608 I/O size of 131072 is greater than zero copy threshold (65536). 
00:17:52.608 Zero copy mechanism will not be used. 00:17:52.867 [2024-11-20 17:03:11.174453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.867 [2024-11-20 17:03:11.220820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.867 [2024-11-20 17:03:11.278421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.867 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.867 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:52.867 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:52.867 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.126 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.695 nvme0n1 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:53.695 17:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:53.695 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:53.695 Zero copy mechanism will not be used. 00:17:53.695 Running I/O for 2 seconds... 
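The xtrace above is the setup that host/digest.sh drives for this digest-error pass, and it boils down to a handful of RPC calls. A minimal sketch of the same sequence follows, assuming the SPDK repo as the working directory, the bdevperf RPC socket at /var/tmp/bperf.sock, the target application's default RPC socket for the accel error injection (the trace uses rpc_cmd rather than bperf_rpc for those calls), and the 10.0.0.3:4420 listener shown in the trace:
  RPC=scripts/rpc.py
  # enable per-controller NVMe error statistics; -1 = retry failed I/O indefinitely
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any stale crc32c error injection on the target side (default RPC socket)
  $RPC accel_error_inject_error -o crc32c -t disable
  # attach the controller with data digest enabled so receive-side CRC32C is computed and verified
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c corruption on the target (-t corrupt -i 32, as driven by the test)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # start the queued randread workload in the already-running bdevperf instance
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # after the run, the pass/fail check reads back the transient transport error counter
  $RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
Each corrupted digest then shows up in the output below as a "data digest error" on the qpair followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the test only requires that the counter read back above is non-zero, as in the (( 124 > 0 )) check earlier in the log.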
00:17:53.695 [2024-11-20 17:03:12.077781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.077858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.077874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.081923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.081958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.081972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.086108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.086145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.086157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.090165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.090202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.090215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.094243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.094281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.094293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.098417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.098455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.098468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.102477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.102515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.102527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.106648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.106686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.106698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 17:03:12.110763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.695 [2024-11-20 17:03:12.110801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 17:03:12.110813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.114900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.114936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.114948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.118936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.118970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.118982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.122913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.122962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.126875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.126910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.126922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.130895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.130928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.130940] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.134891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.134928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.134940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.138917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.138952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.138964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.142935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.142971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.142983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.146817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.146862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.146874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.150708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.150745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.150757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.154904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.154939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.154951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.158856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.158895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.158907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.162818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.162862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.162874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.166818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.166862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.166874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.170782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.170819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.170862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.174846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.174879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.174891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.178795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.178853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.182808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.182851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.182863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.186815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.186861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:53.696 [2024-11-20 17:03:12.186873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.190800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.190865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.190879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.194905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.194941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.194953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.198862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.198897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.198909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.202888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.202922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.202934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.206873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.206908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.206920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.210872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.210907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.210919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.214778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.214811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.214838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 17:03:12.218784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.696 [2024-11-20 17:03:12.218818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 17:03:12.218876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.697 [2024-11-20 17:03:12.222747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.697 [2024-11-20 17:03:12.222781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.697 [2024-11-20 17:03:12.222810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.697 [2024-11-20 17:03:12.226778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.697 [2024-11-20 17:03:12.226815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.697 [2024-11-20 17:03:12.226856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.231334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.231375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.231389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.236113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.236152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.236166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.241020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.241074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.241086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.245832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.245885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.245915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.250719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.250791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.250804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.255425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.255466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.255480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.260059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.260108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.260122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.264701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.264758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.264772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.269315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.269355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.269368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.273951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.273988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.274001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.278348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 
00:17:53.958 [2024-11-20 17:03:12.278388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.278402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.282843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.282893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.282922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.287235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.287301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.287315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.291572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.291625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.291652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.295883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.295947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.295960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.300117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.300154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.300166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.304396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.304436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.304449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.308811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.308859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.308888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.312909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.312943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.958 [2024-11-20 17:03:12.312955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.958 [2024-11-20 17:03:12.316834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.958 [2024-11-20 17:03:12.316882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.316910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.320997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.321045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.325028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.325064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.325077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.329056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.329092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.333054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.333090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.333102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.337055] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.337090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.337102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.340989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.341026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.341038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.345044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.345079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.345090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.349033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.349068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.349080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.353222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.353258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.353271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.357297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.357334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.357346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.361420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.361455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.361468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:17:53.959 [2024-11-20 17:03:12.365413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.365450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.365462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.369533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.369570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.369582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.373674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.373712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.373723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.377762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.377798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.377810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.381882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.381917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.381928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.385863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.385926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.385938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.390037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.390073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.390085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.394031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.394078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.398026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.398061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.398073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.402036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.402072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.402084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.406034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.406070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.406082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.410011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.410045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.410058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.414230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.414282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.414294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.418054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.418090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.418102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.422044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.422079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.959 [2024-11-20 17:03:12.422091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.959 [2024-11-20 17:03:12.426018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.959 [2024-11-20 17:03:12.426054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.426065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.429968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.430003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.430015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.433957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.433993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.434004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.437918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.437952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.437965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.441926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.441961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.446020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.446056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 
[2024-11-20 17:03:12.446068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.449947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.449982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.449994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.454003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.454039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.454051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.458109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.458146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.458158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.462153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.462189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.462201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.466200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.466237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.466249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.470184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.470220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.470232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.474218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.474282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.474295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.478207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.478254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.478282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.482221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.482274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.482303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.486289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.486327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.486339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:53.960 [2024-11-20 17:03:12.490248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:53.960 [2024-11-20 17:03:12.490300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.960 [2024-11-20 17:03:12.490313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.220 [2024-11-20 17:03:12.494239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.220 [2024-11-20 17:03:12.494290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.220 [2024-11-20 17:03:12.494302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.220 [2024-11-20 17:03:12.498251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.220 [2024-11-20 17:03:12.498304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.220 [2024-11-20 17:03:12.498316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.220 [2024-11-20 17:03:12.502232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.502269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.502281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.506226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.506263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.506275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.510233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.510285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.510297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.514159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.514195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.514207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.518174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.518211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.518223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.522220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.522254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.522266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.526209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.526245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.526257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.530199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.530235] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.530247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.534200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.534235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.534247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.538235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.538272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.538284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.542238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.542274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.542286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.546195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.546232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.546244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.550349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.550387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.550400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.554357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.554392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.554404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.558418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.558454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.558466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.562949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.562985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.562998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.567243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.567313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.567326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.571737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.571774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.571787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.576091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.576126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.580454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.580495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.580509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.584789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.584837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.584865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.589336] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.589530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.593922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.593976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.593989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.598380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.598433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.602942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.603012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.603026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.607653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.607705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.607717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.221 [2024-11-20 17:03:12.612044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.221 [2024-11-20 17:03:12.612081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.221 [2024-11-20 17:03:12.612093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.616436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.616476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.616489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.620957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.620991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.621003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.625182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.625218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.625230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.629487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.629526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.629540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.633880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.633945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.633959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.638165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.638201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.638213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.642489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.642529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.642542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.646975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.647013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.647027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.651611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.651665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.651677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.656224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.656288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.656301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.660421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.660461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.660475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.665057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.665097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.665111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.669613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.669653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.669666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.674182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.674251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.674263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.678691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.678729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.678742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.683260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.683296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.683309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.687778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.687813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.692421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.692460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.692473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.696886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.696937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.696951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.701264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.701442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.701459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.705947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.705984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.705997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.710512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.710550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 
[2024-11-20 17:03:12.710563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.715094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.715131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.715159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.719732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.719784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.719797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.724039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.724076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.724091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.728392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.728430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.728444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.222 [2024-11-20 17:03:12.732783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.222 [2024-11-20 17:03:12.732821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.222 [2024-11-20 17:03:12.732866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.223 [2024-11-20 17:03:12.737295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.223 [2024-11-20 17:03:12.737334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.223 [2024-11-20 17:03:12.737347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.223 [2024-11-20 17:03:12.741892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.223 [2024-11-20 17:03:12.741929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.223 [2024-11-20 17:03:12.741943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.223 [2024-11-20 17:03:12.746337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.223 [2024-11-20 17:03:12.746378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.223 [2024-11-20 17:03:12.746392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.223 [2024-11-20 17:03:12.750674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.223 [2024-11-20 17:03:12.750714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.223 [2024-11-20 17:03:12.750728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.755055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.755092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.755105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.759315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.759514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.759659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.764220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.764411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.768950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.768988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.769001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.773114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.773150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.773163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.777376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.777411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.777424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.781759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.781795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.781808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.785820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.785861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.785872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.789938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.789973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.789986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.794074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.794128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.794142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.798488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.488 [2024-11-20 17:03:12.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.488 [2024-11-20 17:03:12.798536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.488 [2024-11-20 17:03:12.802687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.802725] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.802737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.806852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.806887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.806899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.811265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.811446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.811464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.815890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.815928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.815941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.820300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.820339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.820352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.824783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.824817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.824861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.829236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.829273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.829285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.833690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.833726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.833739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.838076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.838113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.838126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.842465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.842504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.842516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.846708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.846744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.846757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.850891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.850926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.850938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.855209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.855244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.855257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.859236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.859272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.859284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.863362] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.863397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.863410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.867376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.867411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.867424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.871526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.871562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.871575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.875954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.876007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.876022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.880418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.880469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.884942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.884977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.884990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.889152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.889188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.889200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.893530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.893569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.893583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.898050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.898085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.898097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.902727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.902763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.902791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.907288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.907327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.907340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.911767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.489 [2024-11-20 17:03:12.911806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.489 [2024-11-20 17:03:12.911820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.489 [2024-11-20 17:03:12.916207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.916389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.916407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.920768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.920804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.920816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.924929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.924963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.924975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.929022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.929055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.929067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.933189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.933224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.933236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.937370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.937406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.937418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.941391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.941427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.941439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.945393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.945427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.945439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.949486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.949520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.949532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.953591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.953640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.953652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.957688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.957723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.957735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.961723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.961758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.961770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.965744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.965779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.965791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.969806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.969850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.973909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.973943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.973955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.977891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.977925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 
[2024-11-20 17:03:12.977936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.982006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.982041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.982054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.985992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.986026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.986037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.990144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.990180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.990191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.994153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.994188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.994200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:12.998084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:12.998119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:12.998131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:13.002214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:13.002250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:13.002278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:13.006278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:13.006313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:13.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:13.010268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:13.010304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:13.010316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:13.014229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:13.014264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:13.014276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.490 [2024-11-20 17:03:13.018175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.490 [2024-11-20 17:03:13.018210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.490 [2024-11-20 17:03:13.018223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.751 [2024-11-20 17:03:13.022148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.751 [2024-11-20 17:03:13.022183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.751 [2024-11-20 17:03:13.022195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.026120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.026155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.026167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.030042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.030076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.030088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.034011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.034057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.038021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.038055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.038067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.042076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.042110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.042122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.046028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.046062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.046074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.049986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.050020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.050031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.054015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.054049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.054061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.057921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.057954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.057967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.061886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.061919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.061931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.065889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.065923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.065935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.752 7316.00 IOPS, 914.50 MiB/s [2024-11-20T17:03:13.288Z] [2024-11-20 17:03:13.071368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.071404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.071416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.075484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.075697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.079792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.079835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.079849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.083755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.083790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.083802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.087729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.087765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.087777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.091675] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.091709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.091721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.095729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.095764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.095776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.099883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.099917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.099929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.103839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.103905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.103918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.107939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.107974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.107986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.111886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.111920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.111931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.115919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.115967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:17:54.752 [2024-11-20 17:03:13.119785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.119819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.119866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.123733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.123768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.123780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.127836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.752 [2024-11-20 17:03:13.127889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.752 [2024-11-20 17:03:13.127904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.752 [2024-11-20 17:03:13.132015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.132050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.132062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.136058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.136092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.136104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.140055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.140101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.144263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.144301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.144315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.148743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.148794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.148806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.153379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.153417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.153447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.157943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.157980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.157994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.162316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.162355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.162368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.166761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.166797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.166810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.171115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.171150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.171162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.175427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.175463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.175476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.179552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.179587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.183488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.183522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.183534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.187562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.187596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.187609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.191642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.191677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.191689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.195683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.195718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.195731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.199643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.199677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.203767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.203802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.203814] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.207792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.207839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.207868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.211763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.211798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.211809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.215742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.215776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.215789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.219789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.219836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.219865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.223775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.223810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.223822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.227845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.227878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.227890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.231757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:54.753 [2024-11-20 17:03:13.231804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.235724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.235758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.235770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.239726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.753 [2024-11-20 17:03:13.239761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.753 [2024-11-20 17:03:13.239773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.753 [2024-11-20 17:03:13.243687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.243722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.243735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.247740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.247774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.251682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.251717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.251729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.255608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.255642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.255654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.259613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.259648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.259661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.263548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.263582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.263594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.267567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.267602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.267614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.271540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.271574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.271586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.275488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.275522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.275534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.279443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.279477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.279489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:54.754 [2024-11-20 17:03:13.283417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:54.754 [2024-11-20 17:03:13.283451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.754 [2024-11-20 17:03:13.283463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.015 [2024-11-20 17:03:13.287382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.015 [2024-11-20 17:03:13.287416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.015 [2024-11-20 17:03:13.287428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.015 [2024-11-20 17:03:13.291458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.015 [2024-11-20 17:03:13.291493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.015 [2024-11-20 17:03:13.291505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.015 [2024-11-20 17:03:13.295489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.015 [2024-11-20 17:03:13.295523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.015 [2024-11-20 17:03:13.295535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.015 [2024-11-20 17:03:13.299506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.015 [2024-11-20 17:03:13.299541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.015 [2024-11-20 17:03:13.299553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.015 [2024-11-20 17:03:13.303435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.015 [2024-11-20 17:03:13.303469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.015 [2024-11-20 17:03:13.303481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.307504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.307540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.307552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.311532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.311567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.315567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 
[2024-11-20 17:03:13.315601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.315613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.319613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.319648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.319661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.323673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.323709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.323721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.327684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.327719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.327731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.331583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.331618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.331630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.335554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.335589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.335601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.339604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.339639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.339651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.343649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.343684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.343696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.347798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.347841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.347853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.351879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.351914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.351925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.355805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.355871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.355884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.359805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.359850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.359862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.363704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.363738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.363750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.367642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.367677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.367689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.371691] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.371726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.371738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.375733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.375768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.375780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.379853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.379886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.379898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.384257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.384296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.384309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.388506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.388545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.388573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.392815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.392843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.392854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.397205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.397241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.397254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:17:55.016 [2024-11-20 17:03:13.401911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.401961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.401991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.406449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.406488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.406501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.410995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.411033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.411046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.016 [2024-11-20 17:03:13.415424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.016 [2024-11-20 17:03:13.415462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.016 [2024-11-20 17:03:13.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.419854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.419916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.419929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.424053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.424086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.424098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.428118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.428151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.428163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.432205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.432265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.432278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.436380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.436418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.436430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.440473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.440510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.440523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.444575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.444611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.444623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.448643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.448693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.448705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.452719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.452766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.456821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.456862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.456876] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.460736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.460771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.460783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.464863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.464926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.464939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.468845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.468886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.468899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.472704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.472738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.472750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.476588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.476638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.476650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.480583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.480618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.480645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.484577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.484613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 
17:03:13.484625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.488456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.488491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.488504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.492387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.492421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.492433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.496390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.496424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.496436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.500397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.500430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.500443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.504396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.504432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.504445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.508536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.508571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.508583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.512580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.512615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.512627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.516607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.516657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.516668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.520578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.520614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.520641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.017 [2024-11-20 17:03:13.524690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.017 [2024-11-20 17:03:13.524725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.017 [2024-11-20 17:03:13.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.018 [2024-11-20 17:03:13.528563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.018 [2024-11-20 17:03:13.528598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.018 [2024-11-20 17:03:13.528610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.018 [2024-11-20 17:03:13.532534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.018 [2024-11-20 17:03:13.532569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.018 [2024-11-20 17:03:13.532581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.018 [2024-11-20 17:03:13.536495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.018 [2024-11-20 17:03:13.536531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.018 [2024-11-20 17:03:13.536544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.018 [2024-11-20 17:03:13.540623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.018 [2024-11-20 17:03:13.540672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.018 [2024-11-20 17:03:13.540684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.018 [2024-11-20 17:03:13.544533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.018 [2024-11-20 17:03:13.544569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.018 [2024-11-20 17:03:13.544581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.548424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.548459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.548471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.552430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.552469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.552481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.556479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.556514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.556528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.560480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.560516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.560529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.564572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.564607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.564619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.568688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.568723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.568734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.572725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.572761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.572773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.576688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.576722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.576733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.580646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.580680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.580692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.584639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.584674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.584687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.588733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.588768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.588780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.592728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.592763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.592775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.596765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 
00:17:55.280 [2024-11-20 17:03:13.596800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.596812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.600740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.600773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.600786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.604876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.604910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.604922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.608745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.608788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.608801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.613200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.613235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.613248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.617496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.617535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.617548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.621993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.622028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.622040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.626619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.626673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.626701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.631189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.631223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.631235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.635753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.635803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.635815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.640280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.640318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.640332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.644769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.280 [2024-11-20 17:03:13.644803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.280 [2024-11-20 17:03:13.644816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.280 [2024-11-20 17:03:13.649244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.649312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.649326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.653812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.653860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.658374] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.658537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.658554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.663111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.663147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.663175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.667576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.667615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.667629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.672008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.672043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.672055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.676386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.676425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.676439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.680813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.680860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.680889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.684912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.684946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.684957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.689004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.689037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.689049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.693102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.693136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.693147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.696999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.697034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.697046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.701032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.701066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.701077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.705052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.705086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.705097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.708970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.709003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.709015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.713051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.713086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.713097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.717030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.717064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.717076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.720989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.721022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.721034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.724910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.724945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.724956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.728934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.728968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.728980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.732886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.732919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.732931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.736845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.736887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.736899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.740754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.740980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.740998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.744962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.744997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.745008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.748970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.749004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.752945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.752978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.752990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.757048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.281 [2024-11-20 17:03:13.757082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.281 [2024-11-20 17:03:13.757094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.281 [2024-11-20 17:03:13.760962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.760996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.761007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.764909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.764943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.764955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.768927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.768961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 
[2024-11-20 17:03:13.768973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.772885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.772927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.772939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.776918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.776951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.776963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.780900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.780933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.780945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.784885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.784927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.784938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.788827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.789057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.789073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.793150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.793185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.793198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.797114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.797149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.797161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.801083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.801116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.801128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.805017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.805051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.805063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.282 [2024-11-20 17:03:13.808924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.282 [2024-11-20 17:03:13.808957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.282 [2024-11-20 17:03:13.808969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.812866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.812909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.812921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.816842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.817070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.817087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.821166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.821202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.821214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.825130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.825164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.825175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.829269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.829301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.829313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.833341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.833376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.833388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.837551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.837590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.837602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.841998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.842033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.842046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.846656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.846693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.846717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.851237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.851302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.851315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.855698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.855737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.855749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.860606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.860812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.860853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.865337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.865373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.865386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.869574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.869608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.869621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.873764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.873798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.873810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.878322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.878357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.544 [2024-11-20 17:03:13.878370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.544 [2024-11-20 17:03:13.882626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.544 [2024-11-20 17:03:13.882661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.882674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.887343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.887525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.887542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.892142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.892179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.892192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.896714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.896752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.896765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.901335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.901516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.901653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.906400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.906576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.906774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.911518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.911699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.911822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.916361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.916534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.916698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.921339] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.921379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.921393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.925818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.925864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.925878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.930206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.930400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.930418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.934810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.934855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.939272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.939310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.943650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.943689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.943703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.947956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.947991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.948004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.951995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.952029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.952042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.956116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.956153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.956165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.960439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.960477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.960491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.964763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.964801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.964814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.968898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.968933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.968946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.973018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.973054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.973067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.977302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.977341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.977353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.981632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.981668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.981680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.985817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.985864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.985878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.989961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.989999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.545 [2024-11-20 17:03:13.990011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.545 [2024-11-20 17:03:13.994172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.545 [2024-11-20 17:03:13.994210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:13.994223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:13.998299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:13.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:13.998349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.002574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.002612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.002624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.006897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.006934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.006948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.011290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.011326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.011339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.015505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.015541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.015554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.019928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.019977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.019992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.024293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.024472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.024491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.028830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.029037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.029211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.033493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.033681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.033819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.038262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.038457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:55.546 [2024-11-20 17:03:14.038628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.042976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.043156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.043324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.047675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.047934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.048093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.052312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.052495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.052637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.057113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.057308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.057562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.061906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.062100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.062232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.546 [2024-11-20 17:03:14.066550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.066742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.546 [2024-11-20 17:03:14.066919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:55.546 7362.50 IOPS, 920.31 MiB/s [2024-11-20T17:03:14.082Z] [2024-11-20 17:03:14.072515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c95400) 00:17:55.546 [2024-11-20 17:03:14.072711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:55.546 [2024-11-20 17:03:14.072871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:17:55.546
00:17:55.546 Latency(us)
00:17:55.546 [2024-11-20T17:03:14.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:55.546 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:55.546 nvme0n1 : 2.00 7355.16 919.40 0.00 0.00 2171.80 1787.35 10962.39
00:17:55.546 [2024-11-20T17:03:14.082Z] ===================================================================================================================
00:17:55.546 [2024-11-20T17:03:14.082Z] Total : 7355.16 919.40 0.00 0.00 2171.80 1787.35 10962.39
00:17:55.805 {
00:17:55.805 "results": [
00:17:55.805 {
00:17:55.805 "job": "nvme0n1",
00:17:55.805 "core_mask": "0x2",
00:17:55.805 "workload": "randread",
00:17:55.805 "status": "finished",
00:17:55.805 "queue_depth": 16,
00:17:55.805 "io_size": 131072,
00:17:55.806 "runtime": 2.004171,
00:17:55.806 "iops": 7355.160812126311,
00:17:55.806 "mibps": 919.3951015157888,
00:17:55.806 "io_failed": 0,
00:17:55.806 "io_timeout": 0,
00:17:55.806 "avg_latency_us": 2171.7956896966407,
00:17:55.806 "min_latency_us": 1787.3454545454545,
00:17:55.806 "max_latency_us": 10962.385454545454
00:17:55.806 }
00:17:55.806 ],
00:17:55.806 "core_count": 1
00:17:55.806 }
00:17:55.806 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:55.806 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:55.806 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:55.806 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:55.806 | .driver_specific
00:17:55.806 | .nvme_error
00:17:55.806 | .status_code
00:17:55.806 | .command_transient_transport_error'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 476 > 0 ))
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80427
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80427 ']'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80427
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80427
00:17:56.065 killing process with pid 80427
00:17:56.065 Received shutdown signal, test time was about 2.000000 seconds
00:17:56.065
00:17:56.065 Latency(us)
00:17:56.065 [2024-11-20T17:03:14.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:56.065 [2024-11-20T17:03:14.601Z] ===================================================================================================================
00:17:56.065 [2024-11-20T17:03:14.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80427'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80427
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80427
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80474
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80474 /var/tmp/bperf.sock
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80474 ']'
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:56.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:56.065 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:56.324 [2024-11-20 17:03:14.641367] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization...
00:17:56.324 [2024-11-20 17:03:14.641681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80474 ] 00:17:56.324 [2024-11-20 17:03:14.789803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.325 [2024-11-20 17:03:14.847663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.595 [2024-11-20 17:03:14.903073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.595 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.595 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:56.595 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.595 17:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.875 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:57.133 nvme0n1 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:57.133 17:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:57.392 Running I/O for 2 seconds... 
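Note: the xtrace above is the setup for the next digest-error pass: bdevperf is started in wait mode on its own RPC socket, NVMe error statistics and unlimited retries are enabled, a data-digest-enabled NVMe/TCP controller is attached, CRC32C error injection is armed, and perform_tests kicks off the 2-second run. A minimal hand-written sketch of that sequence follows, reconstructed only from the commands recorded in this log; the bperf_rpc/rpc_cmd wrappers are simplified stand-ins for the real digest.sh helpers, and the target-side RPC socket (/var/tmp/spdk.sock) is an assumption since it does not appear in this excerpt.
  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc() { "$RPC" -s /var/tmp/bperf.sock "$@"; }   # talks to the bdevperf app
  rpc_cmd()   { "$RPC" "$@"; }                          # talks to the nvmf target (default socket, assumed)
  # start bdevperf in wait-for-tests mode on a private socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # per-command NVMe error counters on, retry transport errors indefinitely
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any previous injection, then attach with data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c corruption on the target (flags exactly as recorded in the trace)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the workload, then read back the transient transport error count
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
The final jq query is what get_transient_errcount evaluated above for the previous randread pass, where the (( 476 > 0 )) check asserted that the injected digest corruption actually surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions.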
00:17:57.392 [2024-11-20 17:03:15.704295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efb048 00:17:57.392 [2024-11-20 17:03:15.705845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.705895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.719489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efb8b8 00:17:57.392 [2024-11-20 17:03:15.721034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.721070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.734380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efc128 00:17:57.392 [2024-11-20 17:03:15.735743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.735778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.749257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efc998 00:17:57.392 [2024-11-20 17:03:15.750602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.763801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efd208 00:17:57.392 [2024-11-20 17:03:15.765395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.765424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.778653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efda78 00:17:57.392 [2024-11-20 17:03:15.780011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.780045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.793356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efe2e8 00:17:57.392 [2024-11-20 17:03:15.794645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.794678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.807965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efeb58 00:17:57.392 [2024-11-20 17:03:15.809312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.809361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.828447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efef90 00:17:57.392 [2024-11-20 17:03:15.831052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.831082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.843166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efeb58 00:17:57.392 [2024-11-20 17:03:15.845822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.845863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.858004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efe2e8 00:17:57.392 [2024-11-20 17:03:15.860603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.860637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.872737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efda78 00:17:57.392 [2024-11-20 17:03:15.875452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.875483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.889134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efd208 00:17:57.392 [2024-11-20 17:03:15.891585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.891623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.906001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efc998 00:17:57.392 [2024-11-20 17:03:15.908576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.392 [2024-11-20 17:03:15.908641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:17:57.392 [2024-11-20 17:03:15.922827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efc128 00:17:57.651 [2024-11-20 17:03:15.925319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.651 [2024-11-20 17:03:15.925486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.651 [2024-11-20 17:03:15.938891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efb8b8 00:17:57.651 [2024-11-20 17:03:15.941324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.651 [2024-11-20 17:03:15.941361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:57.651 [2024-11-20 17:03:15.954005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efb048 00:17:57.652 [2024-11-20 17:03:15.956270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:15.956428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:15.969098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016efa7d8 00:17:57.652 [2024-11-20 17:03:15.971280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:15.971315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:15.983676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef9f68 00:17:57.652 [2024-11-20 17:03:15.986066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:15.986100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:15.998796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef96f8 00:17:57.652 [2024-11-20 17:03:16.001073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.001108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.013609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef8e88 00:17:57.652 [2024-11-20 17:03:16.015827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.015870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.028691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef8618 00:17:57.652 [2024-11-20 17:03:16.030831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.031061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.043506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef7da8 00:17:57.652 [2024-11-20 17:03:16.045680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.045716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.057978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef7538 00:17:57.652 [2024-11-20 17:03:16.060023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.060208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.072527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef6cc8 00:17:57.652 [2024-11-20 17:03:16.074785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.074815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.087007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef6458 00:17:57.652 [2024-11-20 17:03:16.089382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.089415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.101882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef5be8 00:17:57.652 [2024-11-20 17:03:16.103903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.103937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.116087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef5378 00:17:57.652 [2024-11-20 17:03:16.118449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.118483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.130781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef4b08 00:17:57.652 [2024-11-20 17:03:16.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.132953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.145202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef4298 00:17:57.652 [2024-11-20 17:03:16.147243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.147444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.162648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef3a28 00:17:57.652 [2024-11-20 17:03:16.165076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.165266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:57.652 [2024-11-20 17:03:16.180311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef31b8 00:17:57.652 [2024-11-20 17:03:16.182429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.652 [2024-11-20 17:03:16.182577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.196677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef2948 00:17:57.911 [2024-11-20 17:03:16.198781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.198813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.211729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef20d8 00:17:57.911 [2024-11-20 17:03:16.213938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.213970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.226186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef1868 00:17:57.911 [2024-11-20 17:03:16.228036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.228069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.240286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef0ff8 00:17:57.911 [2024-11-20 17:03:16.242503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.242535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.254912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef0788 00:17:57.911 [2024-11-20 17:03:16.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.257111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.269512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeff18 00:17:57.911 [2024-11-20 17:03:16.271414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.271460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.284743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eef6a8 00:17:57.911 [2024-11-20 17:03:16.286770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.286933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.300814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeee38 00:17:57.911 [2024-11-20 17:03:16.302750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.302786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.317205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eee5c8 00:17:57.911 [2024-11-20 17:03:16.319137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.319176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.333776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eedd58 00:17:57.911 [2024-11-20 17:03:16.335721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.335905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.350151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eed4e8 00:17:57.911 [2024-11-20 17:03:16.352282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.352318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.365901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eecc78 00:17:57.911 [2024-11-20 17:03:16.367646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.367680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.381299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eec408 00:17:57.911 [2024-11-20 17:03:16.383033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.383067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.396483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eebb98 00:17:57.911 [2024-11-20 17:03:16.398450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.398611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.412013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeb328 00:17:57.911 [2024-11-20 17:03:16.413970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.414159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.427950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeaab8 00:17:57.911 [2024-11-20 17:03:16.429803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 17:03:16.430033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.911 [2024-11-20 17:03:16.443646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eea248 00:17:57.911 [2024-11-20 17:03:16.445538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.911 [2024-11-20 
17:03:16.445733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.170 [2024-11-20 17:03:16.459416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee99d8 00:17:58.170 [2024-11-20 17:03:16.461283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.170 [2024-11-20 17:03:16.461477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:58.170 [2024-11-20 17:03:16.475929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee9168 00:17:58.170 [2024-11-20 17:03:16.477833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.170 [2024-11-20 17:03:16.478048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:58.170 [2024-11-20 17:03:16.492146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee88f8 00:17:58.171 [2024-11-20 17:03:16.493934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.494116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.507429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee8088 00:17:58.171 [2024-11-20 17:03:16.509356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.509532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.524698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee7818 00:17:58.171 [2024-11-20 17:03:16.526684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.526739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.542432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee6fa8 00:17:58.171 [2024-11-20 17:03:16.544557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.544611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.560605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee6738 00:17:58.171 [2024-11-20 17:03:16.562476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:58.171 [2024-11-20 17:03:16.562513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.578447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee5ec8 00:17:58.171 [2024-11-20 17:03:16.580213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.580273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.595961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee5658 00:17:58.171 [2024-11-20 17:03:16.597813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.597856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.613769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee4de8 00:17:58.171 [2024-11-20 17:03:16.615627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.615658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.631507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee4578 00:17:58.171 [2024-11-20 17:03:16.633306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.633506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.649484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee3d08 00:17:58.171 [2024-11-20 17:03:16.651364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.651395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:58.171 [2024-11-20 17:03:16.667349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee3498 00:17:58.171 [2024-11-20 17:03:16.669198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.669381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:58.171 15941.00 IOPS, 62.27 MiB/s [2024-11-20T17:03:16.707Z] [2024-11-20 17:03:16.686333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee2c28 00:17:58.171 [2024-11-20 17:03:16.687913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.171 [2024-11-20 17:03:16.687982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.704438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee23b8 00:17:58.431 [2024-11-20 17:03:16.706215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.706252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.722375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee1b48 00:17:58.431 [2024-11-20 17:03:16.724164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.724239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.741071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee12d8 00:17:58.431 [2024-11-20 17:03:16.742625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.742667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.758418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee0a68 00:17:58.431 [2024-11-20 17:03:16.759947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.759994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.775533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee01f8 00:17:58.431 [2024-11-20 17:03:16.777074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.792765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016edf988 00:17:58.431 [2024-11-20 17:03:16.794344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.794383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.809710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016edf118 00:17:58.431 [2024-11-20 17:03:16.811332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.827067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ede8a8 00:17:58.431 [2024-11-20 17:03:16.828850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.828913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.844257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ede038 00:17:58.431 [2024-11-20 17:03:16.845670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.845709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.868621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ede038 00:17:58.431 [2024-11-20 17:03:16.871284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.871326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.886493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ede8a8 00:17:58.431 [2024-11-20 17:03:16.889294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.904407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016edf118 00:17:58.431 [2024-11-20 17:03:16.907070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.907266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.922652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016edf988 00:17:58.431 [2024-11-20 17:03:16.925353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.925509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.940624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee01f8 00:17:58.431 [2024-11-20 
17:03:16.943376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.943411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:58.431 [2024-11-20 17:03:16.958523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee0a68 00:17:58.431 [2024-11-20 17:03:16.961460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.431 [2024-11-20 17:03:16.961499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:16.976370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee12d8 00:17:58.691 [2024-11-20 17:03:16.978958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:16.979003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:16.993603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee1b48 00:17:58.691 [2024-11-20 17:03:16.996455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:16.996493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.011451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee23b8 00:17:58.691 [2024-11-20 17:03:17.014134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.014198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.028715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee2c28 00:17:58.691 [2024-11-20 17:03:17.031304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.031384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.046484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee3498 00:17:58.691 [2024-11-20 17:03:17.049046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.049248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.064937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with 
pdu=0x200016ee3d08 00:17:58.691 [2024-11-20 17:03:17.067607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.067646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.083063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee4578 00:17:58.691 [2024-11-20 17:03:17.085588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.100510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee4de8 00:17:58.691 [2024-11-20 17:03:17.102949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.103186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.117491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee5658 00:17:58.691 [2024-11-20 17:03:17.120075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.120107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.134622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee5ec8 00:17:58.691 [2024-11-20 17:03:17.137045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.137286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.152057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee6738 00:17:58.691 [2024-11-20 17:03:17.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.169019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee6fa8 00:17:58.691 [2024-11-20 17:03:17.171399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.171452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.186218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x118cae0) with pdu=0x200016ee7818 00:17:58.691 [2024-11-20 17:03:17.188501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.188541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.202980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee8088 00:17:58.691 [2024-11-20 17:03:17.205311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.205348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:58.691 [2024-11-20 17:03:17.220745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee88f8 00:17:58.691 [2024-11-20 17:03:17.223123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.691 [2024-11-20 17:03:17.223299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.238817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee9168 00:17:58.951 [2024-11-20 17:03:17.241115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.241315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.256435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ee99d8 00:17:58.951 [2024-11-20 17:03:17.258647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.273889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eea248 00:17:58.951 [2024-11-20 17:03:17.276094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.276144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.291278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeaab8 00:17:58.951 [2024-11-20 17:03:17.293624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.293878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.309020] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeb328 00:17:58.951 [2024-11-20 17:03:17.311477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.311667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.326900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eebb98 00:17:58.951 [2024-11-20 17:03:17.329288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.329319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.344994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eec408 00:17:58.951 [2024-11-20 17:03:17.347178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.347244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.362627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eecc78 00:17:58.951 [2024-11-20 17:03:17.365004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.365036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.379922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eed4e8 00:17:58.951 [2024-11-20 17:03:17.381992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.382028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.396768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eedd58 00:17:58.951 [2024-11-20 17:03:17.398917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.398960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.414647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eee5c8 00:17:58.951 [2024-11-20 17:03:17.416896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.416979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.433633] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeee38 00:17:58.951 [2024-11-20 17:03:17.435712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.435749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.450809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eef6a8 00:17:58.951 [2024-11-20 17:03:17.452919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.452977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:58.951 [2024-11-20 17:03:17.469983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016eeff18 00:17:58.951 [2024-11-20 17:03:17.472116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.951 [2024-11-20 17:03:17.472183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.487371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef0788 00:17:59.211 [2024-11-20 17:03:17.489323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.489498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.505216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef0ff8 00:17:59.211 [2024-11-20 17:03:17.507280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.507316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.522824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef1868 00:17:59.211 [2024-11-20 17:03:17.524825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.524864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.541063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef20d8 00:17:59.211 [2024-11-20 17:03:17.543196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.543233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:59.211 
[2024-11-20 17:03:17.558782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef2948 00:17:59.211 [2024-11-20 17:03:17.560707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.560776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.576120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef31b8 00:17:59.211 [2024-11-20 17:03:17.577881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.577919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.593525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef3a28 00:17:59.211 [2024-11-20 17:03:17.595417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.595467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.611134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef4298 00:17:59.211 [2024-11-20 17:03:17.612898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.612936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.627780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef4b08 00:17:59.211 [2024-11-20 17:03:17.629744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.629790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.645654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef5378 00:17:59.211 [2024-11-20 17:03:17.647385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.647439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:59.211 [2024-11-20 17:03:17.662449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef5be8 00:17:59.211 [2024-11-20 17:03:17.664115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.664158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:17:59.211 [2024-11-20 17:03:17.678600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x118cae0) with pdu=0x200016ef6458 00:17:59.211 [2024-11-20 17:03:17.680102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.211 [2024-11-20 17:03:17.680155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:59.211 15181.00 IOPS, 59.30 MiB/s 00:17:59.211 Latency(us) 00:17:59.211 [2024-11-20T17:03:17.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.211 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.211 nvme0n1 : 2.01 15205.12 59.39 0.00 0.00 8407.38 5362.04 28001.75 00:17:59.211 [2024-11-20T17:03:17.747Z] =================================================================================================================== 00:17:59.211 [2024-11-20T17:03:17.747Z] Total : 15205.12 59.39 0.00 0.00 8407.38 5362.04 28001.75 00:17:59.211 { 00:17:59.211 "results": [ 00:17:59.211 { 00:17:59.211 "job": "nvme0n1", 00:17:59.211 "core_mask": "0x2", 00:17:59.211 "workload": "randwrite", 00:17:59.211 "status": "finished", 00:17:59.211 "queue_depth": 128, 00:17:59.211 "io_size": 4096, 00:17:59.211 "runtime": 2.005246, 00:17:59.211 "iops": 15205.11697816627, 00:17:59.211 "mibps": 59.39498819596199, 00:17:59.211 "io_failed": 0, 00:17:59.211 "io_timeout": 0, 00:17:59.211 "avg_latency_us": 8407.38177095322, 00:17:59.211 "min_latency_us": 5362.036363636364, 00:17:59.211 "max_latency_us": 28001.745454545453 00:17:59.211 } 00:17:59.211 ], 00:17:59.211 "core_count": 1 00:17:59.211 } 00:17:59.211 17:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:59.211 17:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:59.211 17:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:59.211 17:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:59.211 | .driver_specific 00:17:59.211 | .nvme_error 00:17:59.211 | .status_code 00:17:59.211 | .command_transient_transport_error' 00:17:59.471 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:17:59.471 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80474 00:17:59.471 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80474 ']' 00:17:59.471 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80474 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80474 00:17:59.730 killing process with pid 80474 00:17:59.730 Received shutdown signal, test time was about 2.000000 seconds 00:17:59.730 00:17:59.730 Latency(us) 00:17:59.730 [2024-11-20T17:03:18.266Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.730 [2024-11-20T17:03:18.266Z] =================================================================================================================== 00:17:59.730 [2024-11-20T17:03:18.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80474' 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80474 00:17:59.730 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80474 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80527 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80527 /var/tmp/bperf.sock 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80527 ']' 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.731 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.989 [2024-11-20 17:03:18.288372] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:17:59.989 [2024-11-20 17:03:18.288692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80527 ] 00:17:59.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:59.989 Zero copy mechanism will not be used. 
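The run_bperf_err randwrite 131072 16 trace above launches a second bdevperf instance in RPC-wait mode and blocks until its socket accepts connections. A minimal standalone sketch of that launch, assuming an SPDK checkout at the path shown in the trace and the stock test helpers sourced from it:

    # Launch bdevperf idle (-z) on its own RPC socket; flags mirror the traced
    # run_bperf_err randwrite 131072 16 call (qd=16, 128 KiB random writes, 2 s run).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk              # path taken from the trace; adjust locally
    source "$SPDK_DIR/test/common/autotest_common.sh"  # provides the waitforlisten helper used above
    BPERF_SOCK=/var/tmp/bperf.sock
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" "$BPERF_SOCK"            # returns once the socket is listening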
00:17:59.989 [2024-11-20 17:03:18.435785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.989 [2024-11-20 17:03:18.495967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.248 [2024-11-20 17:03:18.550196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.248 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.248 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:00.248 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:00.248 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.506 17:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.764 nvme0n1 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:00.764 17:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:01.024 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:01.024 Zero copy mechanism will not be used. 00:18:01.024 Running I/O for 2 seconds... 
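The RPC sequence traced above arms the data-digest failure and then runs the workload; afterwards the test reads back the per-bdev transient-transport-error counter the same way get_transient_errcount does earlier in this log. A sketch under the same assumptions (bperf socket as above; the nvmf target is assumed to listen on SPDK's default /var/tmp/spdk.sock, which is where the rpc_cmd calls in the trace go):

    RPC_BPERF="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
    RPC_TGT="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed default target socket

    $RPC_BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC_TGT accel_error_inject_error -o crc32c -t disable      # keep crc32c clean while attaching
    $RPC_BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC_TGT accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt every 32nd crc32c op

    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # The corrupted crc32c makes the TCP data-digest check fail (the tcp.c
    # data_crc32_calc_done errors below), and each affected WRITE completes as a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22); the test passes when this
    # counter is greater than zero.
    $RPC_BPERF bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'

This mirrors the digest.sh steps visible in the trace; socket paths and helper locations outside the trace are assumptions, not confirmed from the log.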
00:18:01.024 [2024-11-20 17:03:19.332514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.332649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.332699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.337981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.338051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.338074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.342922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.343003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.343025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.347749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.348039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.348063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.352953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.353035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.353056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.357685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.357766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.357788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.362561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.362639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.362670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.367313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.367576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.367598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.372345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.372613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.372890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.377265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.377531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.377705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.382343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.382615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.382778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.387474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.387716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.387897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.392623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.392878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.393155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.397748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.398007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.398174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.402807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.402890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.402912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.407563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.407807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.407830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.412660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.412759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.412785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.417493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.417575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.417595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.422255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.024 [2024-11-20 17:03:19.422323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.024 [2024-11-20 17:03:19.422344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.024 [2024-11-20 17:03:19.426982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.427062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.427083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.431731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.432002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.432025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.436939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.437012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.437033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.441641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.441708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.441729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.446471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.446551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.446571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.451258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.451513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.456417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.456486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.456509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.461198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.461276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.461296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.465942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.466013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.466033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.470628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.470929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.470961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.475711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.475804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.475836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.480499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.480602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.480622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.485335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.485410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.485431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.490052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.490120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.490141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.494741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.494844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.494882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.499537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.499605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 
17:03:19.499626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.504316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.504396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.504417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.508987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.509062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.509082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.513702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.513986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.514008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.518803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.518914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.518934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.523631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.523727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.523748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.528419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.528490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.528511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.533239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.533497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.025 [2024-11-20 17:03:19.533519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.538268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.538349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.538369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.542971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.543053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.543073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.547576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.547654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.547674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.025 [2024-11-20 17:03:19.552354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.025 [2024-11-20 17:03:19.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.025 [2024-11-20 17:03:19.552639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.285 [2024-11-20 17:03:19.557484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.285 [2024-11-20 17:03:19.557556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.285 [2024-11-20 17:03:19.557577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.285 [2024-11-20 17:03:19.562237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.285 [2024-11-20 17:03:19.562306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.285 [2024-11-20 17:03:19.562326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.285 [2024-11-20 17:03:19.566975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.285 [2024-11-20 17:03:19.567062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.285 [2024-11-20 17:03:19.567082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.285 [2024-11-20 17:03:19.571680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.285 [2024-11-20 17:03:19.571760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.285 [2024-11-20 17:03:19.571780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.285 [2024-11-20 17:03:19.576495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.576768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.576797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.581625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.581695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.581716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.586537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.586633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.586654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.591390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.591470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.591490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.596511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.596833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.596872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.602195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.602311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.602334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.607442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.607527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.607549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.612747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.613037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.613059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.618197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.618321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.618343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.623481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.623553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.623575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.628634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.628897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.628920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.633807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.633904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.633929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.638594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.638659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.638681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.643670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.643740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.643761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.649130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.649205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.649243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.654483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.654579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.654618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.659789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.659907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.665013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.665102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.665128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.670177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.670246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.670284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.675225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.675305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.679978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.680051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.680072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.684796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.685076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.685098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.689848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.689926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.689947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.694595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.694665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.286 [2024-11-20 17:03:19.694686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.286 [2024-11-20 17:03:19.699431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.286 [2024-11-20 17:03:19.699523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.699543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.704175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.704273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.704294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.708905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 
17:03:19.708980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.709001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.713534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.713634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.713654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.718333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.718420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.718441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.723078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.723169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.723189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.727761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.727870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.727891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.732565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.732832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.732854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.737618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.737686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.737706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.742387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with 
pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.742454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.742475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.747126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.747215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.747235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.752109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.752190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.752211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.757214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.757285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.757307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.762638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.762751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.762772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.768358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.768446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.768469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.773604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.773722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.773743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.778957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.779059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.779084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.784085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.784172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.784197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.789126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.789194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.789215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.794241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.794345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.794366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.799294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.799531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.804376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.804452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.804474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.809220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.809299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.809319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.814006] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.814074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.814095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.287 [2024-11-20 17:03:19.818675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.287 [2024-11-20 17:03:19.818743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.287 [2024-11-20 17:03:19.818763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.823503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.823756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.823777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.828717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.828794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.828814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.833494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.833574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.833595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.838404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.838503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.838525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.843428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.843685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.843707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.848945] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.849037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.849060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.854277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.854382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.854411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.859683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.859987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.860015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.865207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.865325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.865348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.870326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.870396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.870418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.875589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.875880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.875903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.880985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.881063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.548 
[2024-11-20 17:03:19.885918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.886000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.886020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.891062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.548 [2024-11-20 17:03:19.891134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.548 [2024-11-20 17:03:19.891155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.548 [2024-11-20 17:03:19.895955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.896045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.896066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.900959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.901061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.901082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.906235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.906311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.911569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.911685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.911706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.917013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.917117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.922012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.922086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.922107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.926933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.927010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.931902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.931979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.932000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.936784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.936919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.941928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.942010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.942030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.946784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.946921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.951955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.952036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.952059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.957184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.957257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.957278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.962515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.962591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.962612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.967910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.968009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.968032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.973219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.973292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.973313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.978699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.978783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.978805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.984063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.984138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.984161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.989505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.989579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.989601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:19.994859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:19.994934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:19.994956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:20.000187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:20.000305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:20.000329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:20.005705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:20.005804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:20.005825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:20.010959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:20.011043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:20.011065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:20.016324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:20.016557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:20.016579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.549 [2024-11-20 17:03:20.021431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.549 [2024-11-20 17:03:20.021519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.549 [2024-11-20 17:03:20.021540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.026209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.026298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.026319] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.030956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.031027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.031048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.036047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.036117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.036139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.041015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.041097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.041117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.045736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.045805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.045839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.050637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.050716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.050738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.055614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.055866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.055901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.060936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.061007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.061028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.065756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.065871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.065893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.070717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.070798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.070818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.075751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.550 [2024-11-20 17:03:20.076017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.550 [2024-11-20 17:03:20.076039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.550 [2024-11-20 17:03:20.080987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.081239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.081464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.086057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.086281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.086529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.091193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.091454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.091660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.096472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.096758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 
17:03:20.096968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.101647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.101903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.102126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.106767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.107097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.107277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.111919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.112143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.112404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.117226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.810 [2024-11-20 17:03:20.117461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.810 [2024-11-20 17:03:20.117626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.810 [2024-11-20 17:03:20.122464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.122722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.122744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.127576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.127644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.127666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.132412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.132494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.811 [2024-11-20 17:03:20.132515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.137443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.137521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.137542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.142245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.142321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.142344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.147175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.147257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.147278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.151984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.152057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.152078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.157293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.157381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.157402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.162327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.162560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.162583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.167614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.167716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.167737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.172849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.172965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.172989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.178072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.178143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.178179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.183148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.183231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.183252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.188062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.188130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.188150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.193029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.193102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.193122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.197934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.198018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.198038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.202747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.202838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.202875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.207643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.207724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.207744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.811 [2024-11-20 17:03:20.212527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.811 [2024-11-20 17:03:20.212815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.811 [2024-11-20 17:03:20.212837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.217834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.217929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.217950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.222634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.222714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.222734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.227507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.227595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.227615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.232298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.232535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.232572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.237438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.237518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.237538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.242283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.242363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.242383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.247035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.247104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.247124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.251759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.251882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.251903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.256725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.256998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.257025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.261894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.261962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.261983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.266730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.266815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.266863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.271634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.271723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.271743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.276475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.276751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.276773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.281717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.281784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.281805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.286630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.286709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.286729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.291479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.291558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.291578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.296276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.296527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.296549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.301463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 17:03:20.301533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.301554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.306287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.812 [2024-11-20 
17:03:20.306356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.812 [2024-11-20 17:03:20.306376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.812 [2024-11-20 17:03:20.311068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.311148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.311169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.813 [2024-11-20 17:03:20.315806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.315928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.315964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.813 [2024-11-20 17:03:20.320751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.321004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.321025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.813 [2024-11-20 17:03:20.325814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.325894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.325914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.813 6168.00 IOPS, 771.00 MiB/s [2024-11-20T17:03:20.349Z] [2024-11-20 17:03:20.331977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.332095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.332116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.813 [2024-11-20 17:03:20.336800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.337096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.337119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.813 [2024-11-20 17:03:20.341986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:01.813 [2024-11-20 17:03:20.342077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.813 [2024-11-20 17:03:20.342097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.346713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.346783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.346803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.351595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.351664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.351684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.356392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.356662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.356683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.361558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.361639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.361674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.366405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.366496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.366517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.371262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.371330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.376011] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.376099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.376119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.381102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.381194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.381214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.386226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.386330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.386352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.391441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.391695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.391718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.396791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.396870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.396905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.401745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.401825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.401863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.073 [2024-11-20 17:03:20.406609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.073 [2024-11-20 17:03:20.406680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.073 [2024-11-20 17:03:20.406701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 
[2024-11-20 17:03:20.411531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.411798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.411820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.416793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.416905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.416926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.421677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.421763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.421784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.426570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.426648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.426668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.431405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.431641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.431663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.436516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.436599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.436619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.441368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.441443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.441463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.446107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.446176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.446196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.450936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.451005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.451025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.455738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.456017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.456039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.460789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.460899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.460920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.465536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.465625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.465645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.470387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.470467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.470488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.475193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.475287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.475308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.479974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.480055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.480075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.484754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.484831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.484882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.489581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.489659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.489680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.494435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.494513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.494534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.499252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.499504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.499526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.504424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.504508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.504530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.509279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.509347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.509367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.514051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.514121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.514141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.518839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.518922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.518943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.523730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.074 [2024-11-20 17:03:20.524011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.074 [2024-11-20 17:03:20.524033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.074 [2024-11-20 17:03:20.528859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.528962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.528982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.533598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.533676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.533696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.538429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.538508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.538528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.543211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.543291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.543310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.547949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.548028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.548048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.552766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.552860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.552881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.557571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.557656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.557677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.562376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.562458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.562478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.567127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.567207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.567227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.571894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.571959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.571980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.576705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.576775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.576796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.581593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.581672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.581692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.586378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.586633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.586669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.591367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.591435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.591456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.596147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.596213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.596261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.600825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.600963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.600986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.075 [2024-11-20 17:03:20.605616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.075 [2024-11-20 17:03:20.605696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.075 [2024-11-20 17:03:20.605716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.611011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.611090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 
17:03:20.611113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.616201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.616290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.616313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.621644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.621754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.621775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.627010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.627087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.627107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.632465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.632539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.632593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.637863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.637951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.637974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.643304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.643404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.648681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.648751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:02.338 [2024-11-20 17:03:20.648772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.653968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.654048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.338 [2024-11-20 17:03:20.654069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.338 [2024-11-20 17:03:20.658903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.338 [2024-11-20 17:03:20.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.659019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.663748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.663841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.663879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.668698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.668778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.668798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.673584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.673663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.677962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.678110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.678137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.682609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.683115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.683163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.687772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.688094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.688177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.692872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.693157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.693222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.697641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.697970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.702629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.703122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.703171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.707670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.707975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.708020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.712677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.713001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.713030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.717597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.717927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.722457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.722934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.722981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.727486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.727790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.727817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.732494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.732870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.732912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.737415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.737726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.737763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.742406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.742870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.742914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.339 [2024-11-20 17:03:20.747438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.339 [2024-11-20 17:03:20.747749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.339 [2024-11-20 17:03:20.747838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.752507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.752851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.752891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.757464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.757764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.757802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.762625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.763127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.763175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.767805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.768145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.768209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.773234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.773590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.773631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.778561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.779057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.779090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.784325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.784670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.784711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.790017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.790363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.790402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.795558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.795932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.795972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.800929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.801211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.801254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.806225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.806549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.806581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.811559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.811893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.811948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.816749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.817085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.817123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.821705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.822162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.822194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.826735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 
17:03:20.827078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.827108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.831805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.832103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.832140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.836860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.837187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.837237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.841837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.842128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.842171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.846671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.340 [2024-11-20 17:03:20.847027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.340 [2024-11-20 17:03:20.847059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.340 [2024-11-20 17:03:20.851876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.341 [2024-11-20 17:03:20.852189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.341 [2024-11-20 17:03:20.852226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.341 [2024-11-20 17:03:20.856930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.341 [2024-11-20 17:03:20.857241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.341 [2024-11-20 17:03:20.857278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.341 [2024-11-20 17:03:20.861777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with 
pdu=0x200016eff3c8 00:18:02.341 [2024-11-20 17:03:20.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.341 [2024-11-20 17:03:20.862303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.341 [2024-11-20 17:03:20.866826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.341 [2024-11-20 17:03:20.867144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.341 [2024-11-20 17:03:20.867173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.619 [2024-11-20 17:03:20.872066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.619 [2024-11-20 17:03:20.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.619 [2024-11-20 17:03:20.872428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.619 [2024-11-20 17:03:20.877255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.619 [2024-11-20 17:03:20.877759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.877806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.882596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.882939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.882975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.887943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.888306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.888345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.893426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.893792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.898981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.899279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.904342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.904774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.904807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.910056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.910365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.910450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.915521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.915996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.916028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.921004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.921336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.921420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.926416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.926756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.926795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.931716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.932211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.932296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.937454] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.937751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.937777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.942729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.943066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.943146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.948122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.948459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.948499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.953278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.953613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.953667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.958147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.958454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.958532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.963116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.963439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.963477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.620 [2024-11-20 17:03:20.967953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.968281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.620 [2024-11-20 17:03:20.968320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.620 
[2024-11-20 17:03:20.972798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.620 [2024-11-20 17:03:20.973118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.973147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:20.977700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:20.978021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.978057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:20.982837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:20.983216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.983250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:20.988590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:20.988952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.988992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:20.994149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:20.994463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.994502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:20.999422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:20.999875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:20.999904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.004884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.005186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.005226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.010229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.010541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.010581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.015565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.016009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.016044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.021087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.021393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.021423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.026636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.026966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.026995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.032300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.032648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.032732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.038168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.038531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.038570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.043763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.044275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.044308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.048859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.049168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.049204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.053672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.053980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.054016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.058477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.058812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.058865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.063631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.064080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.064111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.068850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.621 [2024-11-20 17:03:21.069172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.621 [2024-11-20 17:03:21.069219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.621 [2024-11-20 17:03:21.073664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.073974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.074010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.078690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.079002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.079039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.083614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.084088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.084119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.088667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.088977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.089012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.093776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.094113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.098612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.098884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.098943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.103484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.103937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.103974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.108630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.108916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.108943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.113751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.114068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.114099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.118731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.119005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.123636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.124113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.124146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.129052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.129348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.129385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.133941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.134223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.134316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.138919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.139217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.139254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.143920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.144362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 17:03:21.144394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.622 [2024-11-20 17:03:21.149045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.622 [2024-11-20 17:03:21.149341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.622 [2024-11-20 
17:03:21.149419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.882 [2024-11-20 17:03:21.153959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.882 [2024-11-20 17:03:21.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.159099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.159412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.159449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.164135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.164460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.164544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.169176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.169509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.174131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.174426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.174505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.179283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.179571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.179609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.184756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.185109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:02.883 [2024-11-20 17:03:21.185144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.190118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.190462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.190498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.195592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.195920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.195969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.200781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.201122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.201158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.206313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.206595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.206626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.211668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.212005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.212038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.216599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.216918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.216952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.221537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.221892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.226616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.226940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.226974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.231609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.231917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.231976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.236451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.236735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.236767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.241254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.241539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.241571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.246041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.246323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.246356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.250967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.251249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.251282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.255804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.256095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.256127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.260625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.260919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.260952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.265510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.265794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.265838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.270529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.270791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.270844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.883 [2024-11-20 17:03:21.275473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.883 [2024-11-20 17:03:21.275753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.883 [2024-11-20 17:03:21.275780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.280360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.280643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.280682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.285277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.285573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.285606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.290197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.290482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.290515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.295129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.295429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.295463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.299965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.300302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.300336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.304900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.305186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.305220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.309708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.310038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.310074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.314836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.315135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.315169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.319726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.320046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.320080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.324719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.325044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.325072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:02.884 [2024-11-20 17:03:21.329679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11795b0) with pdu=0x200016eff3c8 00:18:02.884 [2024-11-20 17:03:21.329974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.884 [2024-11-20 17:03:21.330001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:02.884 6138.00 IOPS, 767.25 MiB/s 00:18:02.884 Latency(us) 00:18:02.884 [2024-11-20T17:03:21.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.884 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:02.884 nvme0n1 : 2.00 6136.63 767.08 0.00 0.00 2601.48 1616.06 6106.76 00:18:02.884 [2024-11-20T17:03:21.420Z] =================================================================================================================== 00:18:02.884 [2024-11-20T17:03:21.420Z] Total : 6136.63 767.08 0.00 0.00 2601.48 1616.06 6106.76 00:18:02.884 { 00:18:02.884 "results": [ 00:18:02.884 { 00:18:02.884 "job": "nvme0n1", 00:18:02.884 "core_mask": "0x2", 00:18:02.884 "workload": "randwrite", 00:18:02.884 "status": "finished", 00:18:02.884 "queue_depth": 16, 00:18:02.884 "io_size": 131072, 00:18:02.884 "runtime": 2.003704, 00:18:02.884 "iops": 6136.634952068769, 00:18:02.884 "mibps": 767.0793690085961, 00:18:02.884 "io_failed": 0, 00:18:02.884 "io_timeout": 0, 00:18:02.884 "avg_latency_us": 2601.4775418465724, 00:18:02.884 "min_latency_us": 1616.0581818181818, 00:18:02.884 "max_latency_us": 6106.763636363637 00:18:02.884 } 00:18:02.884 ], 00:18:02.884 "core_count": 1 00:18:02.884 } 00:18:02.884 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:02.884 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:02.884 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:02.884 | .driver_specific 00:18:02.884 | .nvme_error 00:18:02.884 | .status_code 00:18:02.884 | .command_transient_transport_error' 00:18:02.884 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 397 > 0 )) 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80527 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80527 ']' 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80527 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:03.144 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.144 17:03:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80527 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:03.403 killing process with pid 80527 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80527' 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80527 00:18:03.403 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.403 00:18:03.403 Latency(us) 00:18:03.403 [2024-11-20T17:03:21.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.403 [2024-11-20T17:03:21.939Z] =================================================================================================================== 00:18:03.403 [2024-11-20T17:03:21.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80527 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80329 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80329 ']' 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80329 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80329 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80329' 00:18:03.403 killing process with pid 80329 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80329 00:18:03.403 17:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80329 00:18:03.661 00:18:03.661 real 0m16.823s 00:18:03.661 user 0m32.365s 00:18:03.661 sys 0m4.592s 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.661 ************************************ 00:18:03.661 END TEST nvmf_digest_error 00:18:03.661 ************************************ 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.661 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@121 -- # sync 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.921 rmmod nvme_tcp 00:18:03.921 rmmod nvme_fabrics 00:18:03.921 rmmod nvme_keyring 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80329 ']' 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80329 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80329 ']' 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80329 00:18:03.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80329) - No such process 00:18:03.921 Process with pid 80329 is not found 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80329 is not found' 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:03.921 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:03.921 17:03:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:04.181 00:18:04.181 real 0m34.565s 00:18:04.181 user 1m5.377s 00:18:04.181 sys 0m9.728s 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.181 ************************************ 00:18:04.181 END TEST nvmf_digest 00:18:04.181 ************************************ 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.181 ************************************ 00:18:04.181 START TEST nvmf_host_multipath 00:18:04.181 ************************************ 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:04.181 * Looking for test storage... 
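Looking back at the digest-error run that ends above: the transient error count (397 in this run) is read back from bdevperf, not from the kernel. get_transient_errcount queries bdevperf's per-bdev I/O statistics over its RPC socket and pulls the NVMe status-code counter out with jq. A minimal standalone sketch of that query, using the socket path and bdev name from this run:

  # Sketch: read the transient-transport-error counter back from bdevperf.
  # Socket path and bdev name are the ones used in the run above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The assertion is only that this counter is non-zero: every injected data-digest failure is expected to complete as a COMMAND TRANSIENT TRANSPORT ERROR rather than a failed I/O, which is why io_failed stays 0 in the JSON results above.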
00:18:04.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.181 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.442 --rc genhtml_branch_coverage=1 00:18:04.442 --rc genhtml_function_coverage=1 00:18:04.442 --rc genhtml_legend=1 00:18:04.442 --rc geninfo_all_blocks=1 00:18:04.442 --rc geninfo_unexecuted_blocks=1 00:18:04.442 00:18:04.442 ' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.442 --rc genhtml_branch_coverage=1 00:18:04.442 --rc genhtml_function_coverage=1 00:18:04.442 --rc genhtml_legend=1 00:18:04.442 --rc geninfo_all_blocks=1 00:18:04.442 --rc geninfo_unexecuted_blocks=1 00:18:04.442 00:18:04.442 ' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.442 --rc genhtml_branch_coverage=1 00:18:04.442 --rc genhtml_function_coverage=1 00:18:04.442 --rc genhtml_legend=1 00:18:04.442 --rc geninfo_all_blocks=1 00:18:04.442 --rc geninfo_unexecuted_blocks=1 00:18:04.442 00:18:04.442 ' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.442 --rc genhtml_branch_coverage=1 00:18:04.442 --rc genhtml_function_coverage=1 00:18:04.442 --rc genhtml_legend=1 00:18:04.442 --rc geninfo_all_blocks=1 00:18:04.442 --rc geninfo_unexecuted_blocks=1 00:18:04.442 00:18:04.442 ' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.442 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:04.443 Cannot find device "nvmf_init_br" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:04.443 Cannot find device "nvmf_init_br2" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:04.443 Cannot find device "nvmf_tgt_br" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.443 Cannot find device "nvmf_tgt_br2" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:04.443 Cannot find device "nvmf_init_br" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:04.443 Cannot find device "nvmf_init_br2" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:04.443 Cannot find device "nvmf_tgt_br" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:04.443 Cannot find device "nvmf_tgt_br2" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:04.443 Cannot find device "nvmf_br" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:04.443 Cannot find device "nvmf_init_if" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:04.443 Cannot find device "nvmf_init_if2" 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:04.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.443 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.703 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.703 17:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:04.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:04.703 00:18:04.703 --- 10.0.0.3 ping statistics --- 00:18:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.703 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:04.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:04.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:04.703 00:18:04.703 --- 10.0.0.4 ping statistics --- 00:18:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.703 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:04.703 00:18:04.703 --- 10.0.0.1 ping statistics --- 00:18:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.703 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:04.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:04.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:04.703 00:18:04.703 --- 10.0.0.2 ping statistics --- 00:18:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.703 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:04.703 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80837 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80837 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80837 ']' 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.704 17:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:04.963 [2024-11-20 17:03:23.274950] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
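The nvmf_veth_init block that completes above gives the test a self-contained network: the target side lives in the nvmf_tgt_ns_spdk namespace, the initiator side stays in the root namespace, and the two are joined by veth pairs enslaved to the nvmf_br bridge, with the pings at the end confirming reachability in both directions before the target application starts. A condensed sketch of the same wiring, assuming the names and addresses used in this run and showing only one of the two initiator/target pairs:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # root namespace -> target address

The nvmf_tgt process whose startup follows is then run inside that namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x3), so its TCP listeners bind on the 10.0.0.3/10.0.0.4 side of this topology.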
00:18:04.963 [2024-11-20 17:03:23.275850] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.963 [2024-11-20 17:03:23.436101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:05.223 [2024-11-20 17:03:23.503367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.223 [2024-11-20 17:03:23.503448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.223 [2024-11-20 17:03:23.503463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.223 [2024-11-20 17:03:23.503474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.223 [2024-11-20 17:03:23.503483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.223 [2024-11-20 17:03:23.506866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.223 [2024-11-20 17:03:23.506888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.223 [2024-11-20 17:03:23.561743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80837 00:18:05.791 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:06.358 [2024-11-20 17:03:24.641862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.358 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:06.617 Malloc0 00:18:06.617 17:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:06.875 17:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.133 17:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:07.391 [2024-11-20 17:03:25.783535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.391 17:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:07.651 [2024-11-20 17:03:26.103644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80896 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80896 /var/tmp/bdevperf.sock 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80896 ']' 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.651 17:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:09.025 17:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.025 17:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:09.025 17:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:09.025 17:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:09.645 Nvme0n1 00:18:09.645 17:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:09.904 Nvme0n1 00:18:09.904 17:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:09.904 17:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:10.836 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:10.836 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:11.094 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:11.659 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:11.659 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80943 00:18:11.659 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:11.659 17:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:18.219 17:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:18.219 17:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.219 Attaching 4 probes... 00:18:18.219 @path[10.0.0.3, 4421]: 12905 00:18:18.219 @path[10.0.0.3, 4421]: 13295 00:18:18.219 @path[10.0.0.3, 4421]: 13536 00:18:18.219 @path[10.0.0.3, 4421]: 14831 00:18:18.219 @path[10.0.0.3, 4421]: 14406 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80943 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:18.219 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:18.477 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:18.477 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81057 00:18:18.477 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:18.477 17:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.039 17:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:25.039 17:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.039 Attaching 4 probes... 00:18:25.039 @path[10.0.0.3, 4420]: 13964 00:18:25.039 @path[10.0.0.3, 4420]: 14147 00:18:25.039 @path[10.0.0.3, 4420]: 14311 00:18:25.039 @path[10.0.0.3, 4420]: 14105 00:18:25.039 @path[10.0.0.3, 4420]: 14882 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81057 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:25.039 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:25.296 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:25.296 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81175 00:18:25.296 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.296 17:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.858 Attaching 4 probes... 00:18:31.858 @path[10.0.0.3, 4421]: 15614 00:18:31.858 @path[10.0.0.3, 4421]: 19172 00:18:31.858 @path[10.0.0.3, 4421]: 18376 00:18:31.858 @path[10.0.0.3, 4421]: 17977 00:18:31.858 @path[10.0.0.3, 4421]: 17859 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81175 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:31.858 17:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:31.858 17:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:32.116 17:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:32.116 17:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:32.116 17:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81287 00:18:32.116 17:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.683 Attaching 4 probes... 
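Each set_ANA_state step in this test is just one nvmf_subsystem_listener_set_ana_state RPC per listener; the initiator was attached earlier with two bdev_nvme_attach_controller calls for the same subsystem (ports 4420 and 4421, -x multipath), so it re-routes I/O according to whatever the target advertises. The step whose probe output follows set both listeners to inaccessible, which is why no @path counters appear below. As a sketch, that transition is:

  # set_ANA_state inaccessible inaccessible: take both paths away from the initiator.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible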
00:18:38.683 00:18:38.683 00:18:38.683 00:18:38.683 00:18:38.683 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:38.683 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81287 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:38.684 17:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:38.684 17:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:39.250 17:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:39.250 17:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81404 00:18:39.250 17:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.250 17:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.813 Attaching 4 probes... 
00:18:45.813 @path[10.0.0.3, 4421]: 16260 00:18:45.813 @path[10.0.0.3, 4421]: 16921 00:18:45.813 @path[10.0.0.3, 4421]: 16893 00:18:45.813 @path[10.0.0.3, 4421]: 17151 00:18:45.813 @path[10.0.0.3, 4421]: 16949 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81404 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.813 17:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:45.813 17:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:46.749 17:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:46.749 17:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81529 00:18:46.749 17:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.749 17:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.317 Attaching 4 probes... 
00:18:53.317 @path[10.0.0.3, 4420]: 18272 00:18:53.317 @path[10.0.0.3, 4420]: 18751 00:18:53.317 @path[10.0.0.3, 4420]: 18838 00:18:53.317 @path[10.0.0.3, 4420]: 18649 00:18:53.317 @path[10.0.0.3, 4420]: 18395 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81529 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:53.317 [2024-11-20 17:04:11.631819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:53.317 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:53.576 17:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:00.144 17:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:00.144 17:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81702 00:19:00.144 17:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:00.144 17:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80837 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:05.451 17:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:05.451 17:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.726 Attaching 4 probes... 
00:19:05.726 @path[10.0.0.3, 4421]: 17164 00:19:05.726 @path[10.0.0.3, 4421]: 17438 00:19:05.726 @path[10.0.0.3, 4421]: 17535 00:19:05.726 @path[10.0.0.3, 4421]: 17464 00:19:05.726 @path[10.0.0.3, 4421]: 17408 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81702 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80896 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80896 ']' 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80896 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.726 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80896 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.998 killing process with pid 80896 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80896' 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80896 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80896 00:19:05.998 { 00:19:05.998 "results": [ 00:19:05.998 { 00:19:05.998 "job": "Nvme0n1", 00:19:05.998 "core_mask": "0x4", 00:19:05.998 "workload": "verify", 00:19:05.998 "status": "terminated", 00:19:05.998 "verify_range": { 00:19:05.998 "start": 0, 00:19:05.998 "length": 16384 00:19:05.998 }, 00:19:05.998 "queue_depth": 128, 00:19:05.998 "io_size": 4096, 00:19:05.998 "runtime": 55.887465, 00:19:05.998 "iops": 7126.1775784605725, 00:19:05.998 "mibps": 27.83663116586161, 00:19:05.998 "io_failed": 0, 00:19:05.998 "io_timeout": 0, 00:19:05.998 "avg_latency_us": 17930.473083728833, 00:19:05.998 "min_latency_us": 128.46545454545455, 00:19:05.998 "max_latency_us": 7046430.72 00:19:05.998 } 00:19:05.998 ], 00:19:05.998 "core_count": 1 00:19:05.998 } 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80896 00:19:05.998 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:05.998 [2024-11-20 17:03:26.194301] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 
24.03.0 initialization... 00:19:05.998 [2024-11-20 17:03:26.194426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80896 ] 00:19:05.998 [2024-11-20 17:03:26.349066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.998 [2024-11-20 17:03:26.419609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.998 [2024-11-20 17:03:26.479269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.998 Running I/O for 90 seconds... 00:19:05.998 6804.00 IOPS, 26.58 MiB/s [2024-11-20T17:04:24.534Z] 6820.50 IOPS, 26.64 MiB/s [2024-11-20T17:04:24.534Z] 6765.67 IOPS, 26.43 MiB/s [2024-11-20T17:04:24.534Z] 6738.25 IOPS, 26.32 MiB/s [2024-11-20T17:04:24.534Z] 6721.80 IOPS, 26.26 MiB/s [2024-11-20T17:04:24.534Z] 6838.83 IOPS, 26.71 MiB/s [2024-11-20T17:04:24.534Z] 6904.00 IOPS, 26.97 MiB/s [2024-11-20T17:04:24.534Z] 6889.12 IOPS, 26.91 MiB/s [2024-11-20T17:04:24.534Z] [2024-11-20 17:03:36.779539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.998 [2024-11-20 17:03:36.779620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.779974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.779996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.998 [2024-11-20 17:03:36.780350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.780372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:05.998 [2024-11-20 17:03:36.780388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.781662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:05.998 [2024-11-20 17:03:36.781695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:05.998 [2024-11-20 17:03:36.781724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.781766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.781803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.781855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.781911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.781951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.781967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.782320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.782337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:19:05.999 [2024-11-20 17:03:36.783630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.783978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.783994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.784023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.784040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.784062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.784079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.784100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:05.999 [2024-11-20 17:03:36.784117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:05.999 [2024-11-20 17:03:36.784138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.784957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.784979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.000 [2024-11-20 17:03:36.784995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.000 [2024-11-20 17:03:36.785882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.000 [2024-11-20 17:03:36.785905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.785923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.785944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.785960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:19:06.001 [2024-11-20 17:03:36.786268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.786960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.786978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:36.787192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:36.787219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.001 6878.67 IOPS, 26.87 MiB/s [2024-11-20T17:04:24.537Z] 6895.90 IOPS, 26.94 MiB/s [2024-11-20T17:04:24.537Z] 6910.36 IOPS, 26.99 MiB/s [2024-11-20T17:04:24.537Z] 6930.58 IOPS, 27.07 MiB/s [2024-11-20T17:04:24.537Z] 6939.00 IOPS, 27.11 MiB/s [2024-11-20T17:04:24.537Z] 6982.79 IOPS, 27.28 MiB/s [2024-11-20T17:04:24.537Z] [2024-11-20 17:03:43.314153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:43.314337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:43.314386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:43.314426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:43.314497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.001 [2024-11-20 17:03:43.314535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.001 [2024-11-20 17:03:43.314552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.314960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:06.002 [2024-11-20 17:03:43.314985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.002 [2024-11-20 17:03:43.315905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.002 [2024-11-20 17:03:43.315939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.315957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.315979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.315995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:19:06.003 [2024-11-20 17:03:43.316276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.316979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.316994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 
[2024-11-20 17:03:43.317538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.003 [2024-11-20 17:03:43.317560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.003 [2024-11-20 17:03:43.317576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.317971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49736 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.317987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.318007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.318023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.318060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.318077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.318098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.318114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.319877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.319963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.319980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.004 [2024-11-20 17:03:43.320272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.320340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.320387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.320441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 
m:0 dnr:0 00:19:06.004 [2024-11-20 17:03:43.320471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.004 [2024-11-20 17:03:43.320490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.320534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.320579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.320623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.320739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.320780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.320822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.320863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.320938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.320981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.321008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.321024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.321060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:43.321078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:43.321105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:43.321122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.005 6961.00 IOPS, 27.19 MiB/s [2024-11-20T17:04:24.541Z] 6564.94 IOPS, 25.64 MiB/s [2024-11-20T17:04:24.541Z] 6737.94 IOPS, 26.32 MiB/s [2024-11-20T17:04:24.541Z] 6895.06 IOPS, 26.93 MiB/s [2024-11-20T17:04:24.541Z] 7013.11 IOPS, 27.39 MiB/s [2024-11-20T17:04:24.541Z] 7111.65 IOPS, 27.78 MiB/s [2024-11-20T17:04:24.541Z] 7200.43 IOPS, 28.13 MiB/s [2024-11-20T17:04:24.541Z] 7278.59 IOPS, 28.43 MiB/s [2024-11-20T17:04:24.541Z] [2024-11-20 17:03:50.583422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.583938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.583982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.005 [2024-11-20 17:03:50.584212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.005 [2024-11-20 17:03:50.584552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.005 [2024-11-20 17:03:50.584567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 
17:03:50.584612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.584954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.584985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585431] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.006 [2024-11-20 17:03:50.585754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 
[2024-11-20 17:03:50.585827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.585974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.585990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.586038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.586071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.586091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.586106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.586126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.586141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.586160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.006 [2024-11-20 17:03:50.586175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.006 [2024-11-20 17:03:50.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.007 [2024-11-20 17:03:50.586800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.586973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.586995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:19:06.007 [2024-11-20 17:03:50.587016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.007 [2024-11-20 17:03:50.587340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.007 [2024-11-20 17:03:50.587359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.587374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.587408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.587451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.587486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.587976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.587990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.588024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115904 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:06.008 [2024-11-20 17:03:50.588768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.588818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.588888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.588933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.588959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.588975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:03:50.589450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:03:50.589482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.008 7039.35 IOPS, 27.50 MiB/s [2024-11-20T17:04:24.544Z] 6746.04 IOPS, 26.35 MiB/s [2024-11-20T17:04:24.544Z] 6476.20 IOPS, 25.30 MiB/s [2024-11-20T17:04:24.544Z] 6227.12 IOPS, 24.32 MiB/s [2024-11-20T17:04:24.544Z] 5996.48 IOPS, 23.42 MiB/s [2024-11-20T17:04:24.544Z] 5782.32 IOPS, 22.59 MiB/s [2024-11-20T17:04:24.544Z] 5582.93 IOPS, 21.81 MiB/s [2024-11-20T17:04:24.544Z] 5613.90 IOPS, 21.93 MiB/s [2024-11-20T17:04:24.544Z] 5701.19 IOPS, 22.27 MiB/s [2024-11-20T17:04:24.544Z] 5786.03 IOPS, 22.60 MiB/s [2024-11-20T17:04:24.544Z] 5871.30 IOPS, 22.93 MiB/s [2024-11-20T17:04:24.544Z] 5948.03 IOPS, 23.23 MiB/s [2024-11-20T17:04:24.544Z] 6020.83 IOPS, 23.52 MiB/s [2024-11-20T17:04:24.544Z] [2024-11-20 17:04:04.092339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:04:04.092410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:04:04.092471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.008 [2024-11-20 17:04:04.092494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.008 [2024-11-20 17:04:04.092518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.092983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.092998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
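The "IOPS, MiB/s" values embedded a few lines above are per-interval throughput samples captured while these completion prints stream past, and the completion lines themselves encode the NVMe status as (status code type / status code): "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is path-related status 0x02, while the "ABORTED - SQ DELETION (00/08)" completions that follow are generic status 0x08, reported once the submission queue is torn down. Below is a minimal sketch of the arithmetic relating those two columns, assuming 512-byte LBAs so that the len:8 commands seen throughout this log are 4 KiB I/Os; the sample values are copied from the log, nothing else is taken from the test harness.

# Sketch: relate the logged IOPS samples to the MiB/s column.
# Assumption: 512-byte LBAs, so "len:8" in the command prints means 4096-byte I/Os.
BLOCK_SIZE = 512                       # assumed LBA size in bytes
BLOCKS_PER_IO = 8                      # "len:8" from the nvme_io_qpair_print_command lines
IO_BYTES = BLOCK_SIZE * BLOCKS_PER_IO  # 4096 bytes per I/O

# IOPS samples copied from the log above
samples_iops = [7039.35, 6746.04, 6476.20, 6227.12, 5996.48, 5782.32,
                5582.93, 5613.90, 5701.19, 5786.03, 5871.30, 5948.03, 6020.83]

for iops in samples_iops:
    mib_per_s = iops * IO_BYTES / (1024 * 1024)
    print(f"{iops:8.2f} IOPS -> {mib_per_s:6.2f} MiB/s")
# e.g. 7039.35 IOPS -> 27.50 MiB/s, matching the figures printed in the log.

Every sample divides out to the same 4 KiB I/O size, which is why the MiB/s column tracks the IOPS column by a constant factor of 256 (4096 / 1048576).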
00:19:06.009 [2024-11-20 17:04:04.093019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.093970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.094000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.094015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.094030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.094045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.094059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.009 [2024-11-20 17:04:04.094088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.094114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.009 [2024-11-20 17:04:04.094130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.009 [2024-11-20 17:04:04.094146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.094335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 
17:04:04.094803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.094968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.094991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.095005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.095035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.095065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.010 [2024-11-20 17:04:04.095102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.010 [2024-11-20 17:04:04.095387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.010 [2024-11-20 17:04:04.095401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30448 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.011 [2024-11-20 17:04:04.095872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.095972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.095988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.096012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.096041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.096070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.011 [2024-11-20 17:04:04.096099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e0290 is same with the state(6) to be set 00:19:06.011 
[2024-11-20 17:04:04.096138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30016 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30472 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30480 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30488 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30496 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30504 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096465] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.011 [2024-11-20 17:04:04.096475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.011 [2024-11-20 17:04:04.096485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30512 len:8 PRP1 0x0 PRP2 0x0 00:19:06.011 [2024-11-20 17:04:04.096498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.011 [2024-11-20 17:04:04.096511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30520 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30528 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30536 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30544 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30552 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:06.012 [2024-11-20 17:04:04.096787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30560 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30568 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30576 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.096960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30584 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.096972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.096985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.096995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30592 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30600 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097084] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30608 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30616 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30624 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30632 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30640 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30648 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.012 [2024-11-20 17:04:04.097390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:06.012 [2024-11-20 17:04:04.097401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30656 len:8 PRP1 0x0 PRP2 0x0 00:19:06.012 [2024-11-20 17:04:04.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.012 [2024-11-20 17:04:04.097601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.012 [2024-11-20 17:04:04.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.012 [2024-11-20 17:04:04.097689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.012 [2024-11-20 17:04:04.097715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.012 [2024-11-20 17:04:04.097743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.012 [2024-11-20 17:04:04.097762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16511d0 is same with the state(6) to be set 00:19:06.012 [2024-11-20 17:04:04.098917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:06.012 [2024-11-20 17:04:04.098958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16511d0 (9): Bad file descriptor 00:19:06.012 [2024-11-20 17:04:04.099380] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.012 [2024-11-20 17:04:04.099415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16511d0 with addr=10.0.0.3, port=4421 00:19:06.012 [2024-11-20 17:04:04.099434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16511d0 is same with the state(6) to be set 00:19:06.012 [2024-11-20 17:04:04.099546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16511d0 (9): Bad file descriptor 00:19:06.012 [2024-11-20 17:04:04.099591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:06.012 [2024-11-20 17:04:04.099617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
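Every record above has the same shape: nvme_io_qpair_print_command prints a queued WRITE or READ (sqid, cid, nsid, lba, len) and spdk_nvme_print_completion then reports it as ABORTED - SQ DELETION (00/08), i.e. generic status, command aborted because its submission queue was deleted while the qpair to 10.0.0.3:4421 was being torn down for a controller reset. Rather than reading the wall of notices, the aborts can be tallied per opcode from a saved copy of this console output; a minimal sketch with standard tools (the try.txt name is only a placeholder borrowed from the cleanup step further down, not necessarily where this output lands):
    # total aborted completions, then a WRITE/READ breakdown
    grep -c 'ABORTED - SQ DELETION' try.txt
    grep -oE '\*NOTICE\*: (WRITE|READ) sqid:[0-9]+' try.txt \
      | awk '{count[$2]++} END {for (op in count) print op, count[op]}'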
00:19:06.013 [2024-11-20 17:04:04.099662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:06.013 [2024-11-20 17:04:04.099693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:06.013 [2024-11-20 17:04:04.099707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:06.013 6085.22 IOPS, 23.77 MiB/s [2024-11-20T17:04:24.549Z] 6167.24 IOPS, 24.09 MiB/s [2024-11-20T17:04:24.549Z] 6243.26 IOPS, 24.39 MiB/s [2024-11-20T17:04:24.549Z] 6323.79 IOPS, 24.70 MiB/s [2024-11-20T17:04:24.549Z] 6404.30 IOPS, 25.02 MiB/s [2024-11-20T17:04:24.549Z] 6476.00 IOPS, 25.30 MiB/s [2024-11-20T17:04:24.549Z] 6541.81 IOPS, 25.55 MiB/s [2024-11-20T17:04:24.549Z] 6603.26 IOPS, 25.79 MiB/s [2024-11-20T17:04:24.549Z] 6662.82 IOPS, 26.03 MiB/s [2024-11-20T17:04:24.549Z] 6723.64 IOPS, 26.26 MiB/s [2024-11-20T17:04:24.549Z] [2024-11-20 17:04:14.150711] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:06.013 6778.28 IOPS, 26.48 MiB/s [2024-11-20T17:04:24.549Z] 6826.81 IOPS, 26.67 MiB/s [2024-11-20T17:04:24.549Z] 6869.50 IOPS, 26.83 MiB/s [2024-11-20T17:04:24.549Z] 6907.59 IOPS, 26.98 MiB/s [2024-11-20T17:04:24.549Z] 6942.88 IOPS, 27.12 MiB/s [2024-11-20T17:04:24.549Z] 6975.84 IOPS, 27.25 MiB/s [2024-11-20T17:04:24.549Z] 7009.77 IOPS, 27.38 MiB/s [2024-11-20T17:04:24.549Z] 7043.02 IOPS, 27.51 MiB/s [2024-11-20T17:04:24.549Z] 7074.22 IOPS, 27.63 MiB/s [2024-11-20T17:04:24.549Z] 7104.15 IOPS, 27.75 MiB/s [2024-11-20T17:04:24.549Z] Received shutdown signal, test time was about 55.888304 seconds 00:19:06.013 00:19:06.013 Latency(us) 00:19:06.013 [2024-11-20T17:04:24.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.013 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.013 Verification LBA range: start 0x0 length 0x4000 00:19:06.013 Nvme0n1 : 55.89 7126.18 27.84 0.00 0.00 17930.47 128.47 7046430.72 00:19:06.013 [2024-11-20T17:04:24.549Z] =================================================================================================================== 00:19:06.013 [2024-11-20T17:04:24.549Z] Total : 7126.18 27.84 0.00 0.00 17930.47 128.47 7046430.72 00:19:06.013 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:06.272 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
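The bdevperf summary above is internally consistent, which is a quick sanity check on a run that includes path drops:
    7126.18 IOPS x 4096 B  ≈ 29.19 MB/s ≈ 27.84 MiB/s      (matches the MiB/s column)
    queue depth 128 / 17.93 ms average latency ≈ 7.1k IOPS  (Little's law, close to the measured rate)
The 7,046,430.72 us maximum (about 7 seconds) reflects I/O that sat queued across the controller resets rather than steady-state behaviour.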
00:19:06.272 rmmod nvme_tcp 00:19:06.531 rmmod nvme_fabrics 00:19:06.531 rmmod nvme_keyring 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80837 ']' 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80837 00:19:06.531 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80837 ']' 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80837 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80837 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.532 killing process with pid 80837 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80837' 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80837 00:19:06.532 17:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80837 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:06.791 
17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.791 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:07.050 00:19:07.050 real 1m2.763s 00:19:07.050 user 2m54.114s 00:19:07.050 sys 0m18.791s 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:07.050 ************************************ 00:19:07.050 END TEST nvmf_host_multipath 00:19:07.050 ************************************ 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.050 ************************************ 00:19:07.050 START TEST nvmf_timeout 00:19:07.050 ************************************ 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:07.050 * Looking for test storage... 
00:19:07.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.050 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.051 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.311 --rc genhtml_branch_coverage=1 00:19:07.311 --rc genhtml_function_coverage=1 00:19:07.311 --rc genhtml_legend=1 00:19:07.311 --rc geninfo_all_blocks=1 00:19:07.311 --rc geninfo_unexecuted_blocks=1 00:19:07.311 00:19:07.311 ' 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.311 --rc genhtml_branch_coverage=1 00:19:07.311 --rc genhtml_function_coverage=1 00:19:07.311 --rc genhtml_legend=1 00:19:07.311 --rc geninfo_all_blocks=1 00:19:07.311 --rc geninfo_unexecuted_blocks=1 00:19:07.311 00:19:07.311 ' 00:19:07.311 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.311 --rc genhtml_branch_coverage=1 00:19:07.311 --rc genhtml_function_coverage=1 00:19:07.311 --rc genhtml_legend=1 00:19:07.311 --rc geninfo_all_blocks=1 00:19:07.311 --rc geninfo_unexecuted_blocks=1 00:19:07.311 00:19:07.311 ' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:07.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.312 --rc genhtml_branch_coverage=1 00:19:07.312 --rc genhtml_function_coverage=1 00:19:07.312 --rc genhtml_legend=1 00:19:07.312 --rc geninfo_all_blocks=1 00:19:07.312 --rc geninfo_unexecuted_blocks=1 00:19:07.312 00:19:07.312 ' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.312 
17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.312 17:04:25 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:07.312 Cannot find device "nvmf_init_br" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:07.312 Cannot find device "nvmf_init_br2" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:07.312 Cannot find device "nvmf_tgt_br" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.312 Cannot find device "nvmf_tgt_br2" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:07.312 Cannot find device "nvmf_init_br" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:07.312 Cannot find device "nvmf_init_br2" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:07.312 Cannot find device "nvmf_tgt_br" 00:19:07.312 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:07.313 Cannot find device "nvmf_tgt_br2" 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:07.313 Cannot find device "nvmf_br" 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:07.313 Cannot find device "nvmf_init_if" 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:07.313 Cannot find device "nvmf_init_if2" 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.313 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
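The block above is nvmf_veth_init from nvmf/common.sh doing its setup: the "Cannot find device" / "Cannot open network namespace" lines are only the pre-cleanup pass failing harmlessly (nothing exists yet, which is why each failing command is followed by "true"). It then builds a veth topology in which the initiator interfaces (10.0.0.1/10.0.0.2) stay in the default namespace, the target interfaces (10.0.0.3/10.0.0.4) live inside nvmf_tgt_ns_spdk, and all the bridge-side peers are enslaved to nvmf_br, with iptables rules accepting NVMe/TCP traffic on port 4420. A minimal root-only sketch of the same topology, reduced to one initiator/one target pair (the *_if2 pair in the log is set up identically, and the SPDK_NVMF iptables comments are omitted):

# Sketch of the topology built by nvmf_veth_init (one veth pair per side shown).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two peer ends together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, mirroring the pings that follow in the log.
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1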
00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:07.572 00:19:07.572 --- 10.0.0.3 ping statistics --- 00:19:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.572 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.572 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:07.572 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:07.572 00:19:07.572 --- 10.0.0.4 ping statistics --- 00:19:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.572 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:07.572 00:19:07.572 --- 10.0.0.1 ping statistics --- 00:19:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.572 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:07.572 17:04:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:07.572 00:19:07.572 --- 10.0.0.2 ping statistics --- 00:19:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.572 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82065 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82065 00:19:07.572 17:04:26 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82065 ']' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.572 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.572 [2024-11-20 17:04:26.095776] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:07.572 [2024-11-20 17:04:26.095921] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.831 [2024-11-20 17:04:26.257612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.831 [2024-11-20 17:04:26.321272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.831 [2024-11-20 17:04:26.321333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.831 [2024-11-20 17:04:26.321347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.831 [2024-11-20 17:04:26.321358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.831 [2024-11-20 17:04:26.321367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.831 [2024-11-20 17:04:26.322628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.831 [2024-11-20 17:04:26.322642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.090 [2024-11-20 17:04:26.382553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.090 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:08.349 [2024-11-20 17:04:26.764441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.349 17:04:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:08.608 Malloc0 00:19:08.608 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.867 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.126 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:09.384 [2024-11-20 17:04:27.806774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82112 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82112 /var/tmp/bdevperf.sock 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82112 ']' 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.384 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
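At this point the target side is fully up: nvmfappstart launched nvmf_tgt inside the namespace and waited for its RPC socket, and host/timeout.sh then issued the RPC calls echoed above. Condensed into a standalone sketch (paths and arguments taken from the log; the polling loop is an illustrative stand-in for waitforlisten, and the transport flags are passed through as-is without interpreting them):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target on cores 0-1 inside the namespace, as echoed by nvmfappstart.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Illustrative stand-in for waitforlisten: poll until the RPC socket answers.
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Transport, backing bdev and subsystem, exactly as the log shows.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420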
00:19:09.385 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.385 17:04:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:09.385 [2024-11-20 17:04:27.867729] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:09.385 [2024-11-20 17:04:27.867822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82112 ] 00:19:09.645 [2024-11-20 17:04:28.017248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.645 [2024-11-20 17:04:28.078320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.645 [2024-11-20 17:04:28.135965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.581 17:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.581 17:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:10.581 17:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:10.839 17:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:11.097 NVMe0n1 00:19:11.097 17:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82130 00:19:11.097 17:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.097 17:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:11.097 Running I/O for 10 seconds... 
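This is the heart of the timeout test: bdevperf runs as a separate SPDK app with its own RPC socket, and the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, so once the target's listener disappears the host will (roughly) retry the connection every 2 seconds and give the controller up after about 5 seconds. A condensed host-side sketch of the same sequence (commands and flags taken from the log; the backgrounding and ordering are illustrative, and the -r -1 option is passed through exactly as timeout.sh passes it):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf in wait-for-RPC mode (-z): 128 QD, 4 KiB verify I/O for 10 s.
"$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# bdev_nvme options exactly as the test passes them.
"$RPC" -s "$SOCK" bdev_nvme_set_options -r -1

# Attach the remote controller with the timeout knobs under test.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the I/O workload defined on the bdevperf command line above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &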
00:19:12.034 17:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.296 7189.00 IOPS, 28.08 MiB/s [2024-11-20T17:04:30.832Z] [2024-11-20 17:04:30.753951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.296 [2024-11-20 17:04:30.754178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.296 [2024-11-20 17:04:30.754198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.296 [2024-11-20 17:04:30.754208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.296 [2024-11-20 17:04:30.754217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65720 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:12.297 [2024-11-20 17:04:30.754457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.297 [2024-11-20 17:04:30.754514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.297 [2024-11-20 17:04:30.754533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 
17:04:30.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.297 [2024-11-20 17:04:30.754689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.754989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.297 [2024-11-20 17:04:30.754998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.297 [2024-11-20 17:04:30.755008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 
[2024-11-20 17:04:30.755505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.298 [2024-11-20 17:04:30.755733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.298 [2024-11-20 17:04:30.755741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:54 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.755989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.755999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66424 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:12.299 [2024-11-20 17:04:30.756334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.299 [2024-11-20 17:04:30.756506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.299 [2024-11-20 17:04:30.756515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.300 [2024-11-20 17:04:30.756658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.756681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee000 is same with the state(6) to be set 00:19:12.300 [2024-11-20 17:04:30.756694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:12.300 [2024-11-20 17:04:30.756701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:12.300 [2024-11-20 17:04:30.756709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66640 len:8 PRP1 0x0 PRP2 0x0 00:19:12.300 [2024-11-20 17:04:30.756718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.300 [2024-11-20 17:04:30.757022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:12.300 [2024-11-20 17:04:30.757101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980e50 (9): Bad file descriptor 00:19:12.300 [2024-11-20 17:04:30.757205] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.300 [2024-11-20 17:04:30.757226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980e50 with 
addr=10.0.0.3, port=4420 00:19:12.300 [2024-11-20 17:04:30.757238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980e50 is same with the state(6) to be set 00:19:12.300 [2024-11-20 17:04:30.757261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980e50 (9): Bad file descriptor 00:19:12.300 [2024-11-20 17:04:30.757278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:12.300 [2024-11-20 17:04:30.757287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:12.300 [2024-11-20 17:04:30.757298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:12.300 [2024-11-20 17:04:30.757309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:12.300 [2024-11-20 17:04:30.757319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:12.300 17:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:14.173 4106.50 IOPS, 16.04 MiB/s [2024-11-20T17:04:32.968Z] 2737.67 IOPS, 10.69 MiB/s [2024-11-20T17:04:32.968Z] [2024-11-20 17:04:32.757579] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.432 [2024-11-20 17:04:32.757687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980e50 with addr=10.0.0.3, port=4420 00:19:14.432 [2024-11-20 17:04:32.757723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980e50 is same with the state(6) to be set 00:19:14.432 [2024-11-20 17:04:32.757751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980e50 (9): Bad file descriptor 00:19:14.432 [2024-11-20 17:04:32.757770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:14.432 [2024-11-20 17:04:32.757779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:14.432 [2024-11-20 17:04:32.757790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:14.432 [2024-11-20 17:04:32.757801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:14.432 [2024-11-20 17:04:32.757812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:14.432 17:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:14.432 17:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:14.432 17:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:14.691 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:14.691 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:14.691 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:14.691 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:14.950 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:14.950 17:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:16.157 2053.25 IOPS, 8.02 MiB/s [2024-11-20T17:04:34.954Z] 1642.60 IOPS, 6.42 MiB/s [2024-11-20T17:04:34.954Z] [2024-11-20 17:04:34.757997] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.418 [2024-11-20 17:04:34.758070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980e50 with addr=10.0.0.3, port=4420 00:19:16.418 [2024-11-20 17:04:34.758087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980e50 is same with the state(6) to be set 00:19:16.418 [2024-11-20 17:04:34.758114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980e50 (9): Bad file descriptor 00:19:16.418 [2024-11-20 17:04:34.758134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:16.418 [2024-11-20 17:04:34.758145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:16.418 [2024-11-20 17:04:34.758156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:16.418 [2024-11-20 17:04:34.758167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:16.418 [2024-11-20 17:04:34.758179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:18.289 1368.83 IOPS, 5.35 MiB/s [2024-11-20T17:04:36.825Z] 1173.29 IOPS, 4.58 MiB/s [2024-11-20T17:04:36.825Z] [2024-11-20 17:04:36.758365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:18.289 [2024-11-20 17:04:36.758422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:18.289 [2024-11-20 17:04:36.758433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:18.289 [2024-11-20 17:04:36.758442] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:18.289 [2024-11-20 17:04:36.758455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:19.485 1026.62 IOPS, 4.01 MiB/s 00:19:19.485 Latency(us) 00:19:19.485 [2024-11-20T17:04:38.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.485 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.485 Verification LBA range: start 0x0 length 0x4000 00:19:19.485 NVMe0n1 : 8.14 1008.62 3.94 15.72 0.00 124773.14 3932.16 7015926.69 00:19:19.485 [2024-11-20T17:04:38.021Z] =================================================================================================================== 00:19:19.485 [2024-11-20T17:04:38.021Z] Total : 1008.62 3.94 15.72 0.00 124773.14 3932.16 7015926.69 00:19:19.485 { 00:19:19.485 "results": [ 00:19:19.485 { 00:19:19.485 "job": "NVMe0n1", 00:19:19.485 "core_mask": "0x4", 00:19:19.485 "workload": "verify", 00:19:19.485 "status": "finished", 00:19:19.485 "verify_range": { 00:19:19.485 "start": 0, 00:19:19.485 "length": 16384 00:19:19.485 }, 00:19:19.485 "queue_depth": 128, 00:19:19.485 "io_size": 4096, 00:19:19.485 "runtime": 8.142836, 00:19:19.485 "iops": 1008.6166539520137, 00:19:19.485 "mibps": 3.9399088045000537, 00:19:19.485 "io_failed": 128, 00:19:19.485 "io_timeout": 0, 00:19:19.485 "avg_latency_us": 124773.13583415985, 00:19:19.485 "min_latency_us": 3932.16, 00:19:19.485 "max_latency_us": 7015926.69090909 00:19:19.485 } 00:19:19.485 ], 00:19:19.485 "core_count": 1 00:19:19.485 } 00:19:20.051 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:20.051 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:20.051 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:20.326 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:20.326 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:20.326 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:20.326 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82130 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82112 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82112 ']' 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82112 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82112 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:20.593 killing process with pid 82112 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82112' 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82112 00:19:20.593 Received shutdown signal, test time was about 9.318639 seconds 00:19:20.593 00:19:20.593 Latency(us) 00:19:20.593 [2024-11-20T17:04:39.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.593 [2024-11-20T17:04:39.129Z] =================================================================================================================== 00:19:20.593 [2024-11-20T17:04:39.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.593 17:04:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82112 00:19:20.593 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.861 [2024-11-20 17:04:39.355600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82257 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82257 /var/tmp/bdevperf.sock 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82257 ']' 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.861 17:04:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 [2024-11-20 17:04:39.426307] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:19:21.124 [2024-11-20 17:04:39.426404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:19:21.124 [2024-11-20 17:04:39.569506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.124 [2024-11-20 17:04:39.627364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.382 [2024-11-20 17:04:39.684278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:21.948 17:04:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.948 17:04:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:21.948 17:04:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:22.206 17:04:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:22.465 NVMe0n1 00:19:22.724 17:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.724 17:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82282 00:19:22.724 17:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:22.724 Running I/O for 10 seconds... 
00:19:23.660 17:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.921 7444.00 IOPS, 29.08 MiB/s [2024-11-20T17:04:42.457Z] [2024-11-20 17:04:42.232728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.232991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.232999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.233009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67232 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.233017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.233027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.233035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.233061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.233085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.233113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.233122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.921 [2024-11-20 17:04:42.233133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.921 [2024-11-20 17:04:42.233142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 
[2024-11-20 17:04:42.233265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.922 [2024-11-20 17:04:42.233965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.922 [2024-11-20 17:04:42.233976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.233986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.233997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234119] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.923 [2024-11-20 17:04:42.234686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.923 [2024-11-20 17:04:42.234776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.923 [2024-11-20 17:04:42.234802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.924 [2024-11-20 17:04:42.234874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.234965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 
17:04:42.234986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.234997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.924 [2024-11-20 17:04:42.235555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1b000 is same with the state(6) to be set 00:19:23.924 [2024-11-20 17:04:42.235586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.924 [2024-11-20 17:04:42.235594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.924 [2024-11-20 17:04:42.235602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67152 len:8 PRP1 0x0 PRP2 0x0 00:19:23.924 [2024-11-20 17:04:42.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.924 [2024-11-20 17:04:42.235785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.924 [2024-11-20 17:04:42.235806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.924 [2024-11-20 17:04:42.235824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.924 [2024-11-20 17:04:42.235834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.925 [2024-11-20 17:04:42.235843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.925 [2024-11-20 17:04:42.235852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:23.925 [2024-11-20 17:04:42.236091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:23.925 [2024-11-20 17:04:42.236115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:23.925 [2024-11-20 17:04:42.236211] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.925 [2024-11-20 17:04:42.236233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:23.925 [2024-11-20 17:04:42.236244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:23.925 [2024-11-20 17:04:42.236262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:23.925 [2024-11-20 17:04:42.236278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:23.925 [2024-11-20 17:04:42.236318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:23.925 [2024-11-20 17:04:42.236333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:23.925 [2024-11-20 17:04:42.236345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
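The repeated "connect() failed, errno = 111" entries from uring_sock_create above are what keep the reset loop going: on Linux, errno 111 is ECONNREFUSED, which is consistent with the test having taken the 10.0.0.3:4420 listener down at this point. A minimal check, assuming python3 is available on the test VM (illustrative only, not part of the test itself):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused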
00:19:23.925 [2024-11-20 17:04:42.236356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:23.925 17:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:24.859 4170.50 IOPS, 16.29 MiB/s [2024-11-20T17:04:43.395Z] [2024-11-20 17:04:43.236468] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.859 [2024-11-20 17:04:43.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:24.859 [2024-11-20 17:04:43.236572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:24.859 [2024-11-20 17:04:43.236594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:24.859 [2024-11-20 17:04:43.236614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:24.859 [2024-11-20 17:04:43.236624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:24.859 [2024-11-20 17:04:43.236650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:24.859 [2024-11-20 17:04:43.236691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:24.859 [2024-11-20 17:04:43.236703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:24.859 17:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:25.117 [2024-11-20 17:04:43.520388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.117 17:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82282 00:19:25.943 2780.33 IOPS, 10.86 MiB/s [2024-11-20T17:04:44.479Z] [2024-11-20 17:04:44.247509] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
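The cycle that just completed (queued I/O aborted with SQ DELETION, reconnects refused, then "Resetting controller successful" once the listener is back) is the listener toggle the timeout test performs around a running bdevperf job. A rough sketch of that RPC sequence, using only the commands, NQN, and address visible in this log (the surrounding bdevperf run and the exact sleep durations are assumptions):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # in-flight I/O is aborted, reconnects fail with ECONNREFUSED
    sleep 1
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420      # next reconnect succeeds and the controller reset completes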
00:19:27.811 2085.25 IOPS, 8.15 MiB/s
[2024-11-20T17:04:47.283Z] 3442.20 IOPS, 13.45 MiB/s
[2024-11-20T17:04:48.216Z] 4533.83 IOPS, 17.71 MiB/s
[2024-11-20T17:04:49.153Z] 5265.57 IOPS, 20.57 MiB/s
[2024-11-20T17:04:50.528Z] 5841.38 IOPS, 22.82 MiB/s
[2024-11-20T17:04:51.462Z] 6282.11 IOPS, 24.54 MiB/s
[2024-11-20T17:04:51.462Z] 6637.10 IOPS, 25.93 MiB/s
00:19:32.926 Latency(us)
00:19:32.926 [2024-11-20T17:04:51.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:32.926 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:32.926 Verification LBA range: start 0x0 length 0x4000
00:19:32.926 NVMe0n1 : 10.01 6638.48 25.93 0.00 0.00 19239.09 2100.13 3019898.88
00:19:32.926 [2024-11-20T17:04:51.462Z] ===================================================================================================================
00:19:32.926 [2024-11-20T17:04:51.462Z] Total : 6638.48 25.93 0.00 0.00 19239.09 2100.13 3019898.88
00:19:32.926 {
00:19:32.926   "results": [
00:19:32.926     {
00:19:32.926       "job": "NVMe0n1",
00:19:32.926       "core_mask": "0x4",
00:19:32.926       "workload": "verify",
00:19:32.926       "status": "finished",
00:19:32.926       "verify_range": {
00:19:32.926         "start": 0,
00:19:32.926         "length": 16384
00:19:32.926       },
00:19:32.926       "queue_depth": 128,
00:19:32.926       "io_size": 4096,
00:19:32.926       "runtime": 10.00876,
00:19:32.926       "iops": 6638.484687413826,
00:19:32.926       "mibps": 25.931580810210257,
00:19:32.926       "io_failed": 0,
00:19:32.926       "io_timeout": 0,
00:19:32.926       "avg_latency_us": 19239.09077018853,
00:19:32.926       "min_latency_us": 2100.130909090909,
00:19:32.926       "max_latency_us": 3019898.88
00:19:32.926     }
00:19:32.926   ],
00:19:32.926   "core_count": 1
00:19:32.926 }
00:19:32.927 17:04:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82387
00:19:32.927 17:04:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:32.927 17:04:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:32.926 Running I/O for 10 seconds...
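The summary row of the 10-second run above can be cross-checked against the JSON block that follows it: MiB/s is iops * io_size / 2^20 and Fail/s is io_failed / runtime. A quick sketch with the values from this run (illustrative only):

    awk 'BEGIN {
        iops = 6638.484687413826; io_size = 4096; runtime = 10.00876; io_failed = 0
        printf "MiB/s  = %.2f\n", iops * io_size / (1024 * 1024)   # 25.93, matches the table
        printf "Fail/s = %.2f\n", io_failed / runtime              # 0.00, matches the table
    }'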
00:19:33.863 17:04:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:33.863 7076.00 IOPS, 27.64 MiB/s [2024-11-20T17:04:52.399Z] [2024-11-20 17:04:52.378900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.863 [2024-11-20 17:04:52.378975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.863 [2024-11-20 17:04:52.379014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.863 [2024-11-20 17:04:52.379025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.863 [2024-11-20 17:04:52.379038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 
[2024-11-20 17:04:52.379434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.864 [2024-11-20 17:04:52.379861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.864 [2024-11-20 17:04:52.379872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.379908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.379929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.379949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.379969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.379989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.379998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:33.865 [2024-11-20 17:04:52.380071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.865 [2024-11-20 17:04:52.380699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.865 [2024-11-20 17:04:52.380712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.380822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:33.866 [2024-11-20 17:04:52.380915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.380988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.380997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381119] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.381160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.866 [2024-11-20 17:04:52.381344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.866 [2024-11-20 17:04:52.381517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.866 [2024-11-20 17:04:52.381526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.867 [2024-11-20 17:04:52.381658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.381669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1c180 is same with the state(6) to be set 00:19:33.867 [2024-11-20 17:04:52.381681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.867 [2024-11-20 17:04:52.381689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.867 [2024-11-20 17:04:52.381698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:19:33.867 [2024-11-20 17:04:52.381707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.867 [2024-11-20 17:04:52.382005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:33.867 [2024-11-20 17:04:52.382084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:33.867 [2024-11-20 17:04:52.382194] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.867 [2024-11-20 17:04:52.382217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:33.867 [2024-11-20 
17:04:52.382228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:33.867 [2024-11-20 17:04:52.382247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:33.867 [2024-11-20 17:04:52.382265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:33.867 [2024-11-20 17:04:52.382275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:33.867 [2024-11-20 17:04:52.382285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:33.867 [2024-11-20 17:04:52.382297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:33.867 [2024-11-20 17:04:52.382308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:34.125 17:04:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:35.060 4042.50 IOPS, 15.79 MiB/s [2024-11-20T17:04:53.596Z] [2024-11-20 17:04:53.382436] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.060 [2024-11-20 17:04:53.382533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:35.060 [2024-11-20 17:04:53.382550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:35.060 [2024-11-20 17:04:53.382575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:35.060 [2024-11-20 17:04:53.382595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:35.060 [2024-11-20 17:04:53.382605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:35.060 [2024-11-20 17:04:53.382616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:35.060 [2024-11-20 17:04:53.382629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:35.060 [2024-11-20 17:04:53.382641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:35.995 2695.00 IOPS, 10.53 MiB/s [2024-11-20T17:04:54.531Z] [2024-11-20 17:04:54.382769] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.995 [2024-11-20 17:04:54.382893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:35.995 [2024-11-20 17:04:54.382919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:35.995 [2024-11-20 17:04:54.382952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:35.995 [2024-11-20 17:04:54.382974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:35.995 [2024-11-20 17:04:54.382984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:35.995 [2024-11-20 17:04:54.382996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:35.995 [2024-11-20 17:04:54.383008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:35.995 [2024-11-20 17:04:54.383020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:36.932 2021.25 IOPS, 7.90 MiB/s [2024-11-20T17:04:55.468Z] [2024-11-20 17:04:55.386530] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.932 [2024-11-20 17:04:55.386638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ade50 with addr=10.0.0.3, port=4420 00:19:36.932 [2024-11-20 17:04:55.386654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ade50 is same with the state(6) to be set 00:19:36.932 [2024-11-20 17:04:55.386931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade50 (9): Bad file descriptor 00:19:36.932 [2024-11-20 17:04:55.387185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:36.932 [2024-11-20 17:04:55.387208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:36.932 [2024-11-20 17:04:55.387221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:36.932 [2024-11-20 17:04:55.387234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:36.932 [2024-11-20 17:04:55.387246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:36.932 17:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:37.216 [2024-11-20 17:04:55.644852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:37.216 17:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82387 00:19:38.039 1617.00 IOPS, 6.32 MiB/s [2024-11-20T17:04:56.575Z] [2024-11-20 17:04:56.418415] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:39.916 2678.00 IOPS, 10.46 MiB/s [2024-11-20T17:04:59.387Z] 3676.00 IOPS, 14.36 MiB/s [2024-11-20T17:05:00.322Z] 4419.50 IOPS, 17.26 MiB/s [2024-11-20T17:05:01.256Z] 4999.56 IOPS, 19.53 MiB/s [2024-11-20T17:05:01.515Z] 5469.20 IOPS, 21.36 MiB/s 00:19:42.979 Latency(us) 00:19:42.979 [2024-11-20T17:05:01.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.979 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.979 Verification LBA range: start 0x0 length 0x4000 00:19:42.979 NVMe0n1 : 10.01 5475.88 21.39 3723.38 0.00 13886.55 681.43 3019898.88 00:19:42.979 [2024-11-20T17:05:01.515Z] =================================================================================================================== 00:19:42.979 [2024-11-20T17:05:01.515Z] Total : 5475.88 21.39 3723.38 0.00 13886.55 0.00 3019898.88 00:19:42.979 { 00:19:42.979 "results": [ 00:19:42.979 { 00:19:42.979 "job": "NVMe0n1", 00:19:42.979 "core_mask": "0x4", 00:19:42.979 "workload": "verify", 00:19:42.979 "status": "finished", 00:19:42.979 "verify_range": { 00:19:42.979 "start": 0, 00:19:42.979 "length": 16384 00:19:42.979 }, 00:19:42.979 "queue_depth": 128, 00:19:42.979 "io_size": 4096, 00:19:42.979 "runtime": 10.009712, 00:19:42.979 "iops": 5475.881823572946, 00:19:42.979 "mibps": 21.39016337333182, 00:19:42.979 "io_failed": 37270, 00:19:42.979 "io_timeout": 0, 00:19:42.979 "avg_latency_us": 13886.551749014217, 00:19:42.979 "min_latency_us": 681.4254545454545, 00:19:42.979 "max_latency_us": 3019898.88 00:19:42.979 } 00:19:42.979 ], 00:19:42.979 "core_count": 1 00:19:42.979 } 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82257 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82257 ']' 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82257 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82257 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:42.979 killing process with pid 82257 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82257' 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82257 00:19:42.979 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.979 00:19:42.979 Latency(us) 00:19:42.979 [2024-11-20T17:05:01.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.979 [2024-11-20T17:05:01.515Z] =================================================================================================================== 00:19:42.979 [2024-11-20T17:05:01.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82257 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82501 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82501 /var/tmp/bdevperf.sock 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82501 ']' 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.979 17:05:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:43.238 [2024-11-20 17:05:01.571423] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:19:43.238 [2024-11-20 17:05:01.571554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82501 ] 00:19:43.238 [2024-11-20 17:05:01.716365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.238 [2024-11-20 17:05:01.771433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.496 [2024-11-20 17:05:01.826846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82517 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82501 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:44.430 17:05:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:44.996 NVMe0n1 00:19:44.996 17:05:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82557 00:19:44.996 17:05:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.996 17:05:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:44.996 Running I/O for 10 seconds... 
00:19:45.931 17:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:46.193 15494.00 IOPS, 60.52 MiB/s [2024-11-20T17:05:04.729Z] [2024-11-20 17:05:04.514746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.193 [2024-11-20 17:05:04.514999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 
00:19:46.194 [2024-11-20 17:05:04.515007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515366] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the 
state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.194 [2024-11-20 17:05:04.515665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7ac0 is same with the state(6) to be set 00:19:46.195 [2024-11-20 17:05:04.515926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.515956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.515979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.515991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 
17:05:04.516209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.195 [2024-11-20 17:05:04.516544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.195 [2024-11-20 17:05:04.516555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21720 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.516986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.516997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:46.196 [2024-11-20 17:05:04.517093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517301] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.196 [2024-11-20 17:05:04.517367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.196 [2024-11-20 17:05:04.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.517985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.517996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:46.197 [2024-11-20 17:05:04.518160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.197 [2024-11-20 17:05:04.518189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.197 [2024-11-20 17:05:04.518200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 
17:05:04.518377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.198 [2024-11-20 17:05:04.518661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad3ec0 is same with the state(6) to be set 00:19:46.198 [2024-11-20 17:05:04.518683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:46.198 [2024-11-20 17:05:04.518690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:46.198 [2024-11-20 17:05:04.518699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71320 len:8 PRP1 0x0 PRP2 0x0 00:19:46.198 [2024-11-20 17:05:04.518709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.198 [2024-11-20 17:05:04.518893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.198 [2024-11-20 17:05:04.518916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.198 [2024-11-20 17:05:04.518935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.198 [2024-11-20 17:05:04.518954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.198 [2024-11-20 17:05:04.518962] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e50 is same with the state(6) to be set 00:19:46.198 [2024-11-20 17:05:04.519210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:46.198 [2024-11-20 17:05:04.519243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e50 (9): Bad file descriptor 00:19:46.198 [2024-11-20 17:05:04.519350] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.198 [2024-11-20 17:05:04.519382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e50 with addr=10.0.0.3, port=4420 00:19:46.198 [2024-11-20 17:05:04.519394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e50 is same with the state(6) to be set 00:19:46.198 [2024-11-20 17:05:04.519416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e50 (9): Bad file descriptor 00:19:46.198 [2024-11-20 17:05:04.519432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:46.198 [2024-11-20 17:05:04.519448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:46.198 [2024-11-20 17:05:04.519459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:46.198 [2024-11-20 17:05:04.519471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:46.198 [2024-11-20 17:05:04.519481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:46.198 17:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82557 00:19:48.079 8764.00 IOPS, 34.23 MiB/s [2024-11-20T17:05:06.615Z] 5842.67 IOPS, 22.82 MiB/s [2024-11-20T17:05:06.615Z] [2024-11-20 17:05:06.535309] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.079 [2024-11-20 17:05:06.535401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e50 with addr=10.0.0.3, port=4420 00:19:48.079 [2024-11-20 17:05:06.535420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e50 is same with the state(6) to be set 00:19:48.079 [2024-11-20 17:05:06.535446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e50 (9): Bad file descriptor 00:19:48.079 [2024-11-20 17:05:06.535466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:48.079 [2024-11-20 17:05:06.535477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:48.079 [2024-11-20 17:05:06.535489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:48.079 [2024-11-20 17:05:06.535500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:19:48.079 [2024-11-20 17:05:06.535512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:49.952 4382.00 IOPS, 17.12 MiB/s [2024-11-20T17:05:08.746Z] 3505.60 IOPS, 13.69 MiB/s [2024-11-20T17:05:08.746Z] [2024-11-20 17:05:08.535691] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.210 [2024-11-20 17:05:08.535766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e50 with addr=10.0.0.3, port=4420 00:19:50.210 [2024-11-20 17:05:08.535783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e50 is same with the state(6) to be set 00:19:50.210 [2024-11-20 17:05:08.535807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e50 (9): Bad file descriptor 00:19:50.210 [2024-11-20 17:05:08.535840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:50.210 [2024-11-20 17:05:08.535853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:50.210 [2024-11-20 17:05:08.535865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:50.210 [2024-11-20 17:05:08.535878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:50.210 [2024-11-20 17:05:08.535890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:52.079 2921.33 IOPS, 11.41 MiB/s [2024-11-20T17:05:10.616Z] 2504.00 IOPS, 9.78 MiB/s [2024-11-20T17:05:10.616Z] [2024-11-20 17:05:10.535976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:52.080 [2024-11-20 17:05:10.536037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:52.080 [2024-11-20 17:05:10.536065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:52.080 [2024-11-20 17:05:10.536075] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:52.080 [2024-11-20 17:05:10.536088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
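Note: the block above shows the host repeatedly failing to reconnect (connect() errno = 111, connection refused) and retrying roughly every two seconds until the controller is declared lost, which is the behaviour the timeout test is exercising. Purely as an illustrative sketch, and not a command taken from this run, a bdev NVMe controller with a bounded reconnect policy can be attached along these lines (the long option names are assumptions about the rpc.py interface and may differ between SPDK versions):

    # Hypothetical sketch (not from this log): attach an NVMe-oF/TCP
    # controller that retries the connection every 2 s and gives up
    # after 10 s, mirroring the cadence of the reconnect notices above.
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 10 \
        --fast-io-fail-timeout-sec 5   # option names assumed; verify against your SPDK version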
00:19:53.015 2191.00 IOPS, 8.56 MiB/s 00:19:53.015 Latency(us) 00:19:53.015 [2024-11-20T17:05:11.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.015 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:53.015 NVMe0n1 : 8.15 2150.26 8.40 15.70 0.00 58994.48 7804.74 7015926.69 00:19:53.015 [2024-11-20T17:05:11.551Z] =================================================================================================================== 00:19:53.015 [2024-11-20T17:05:11.551Z] Total : 2150.26 8.40 15.70 0.00 58994.48 7804.74 7015926.69 00:19:53.015 { 00:19:53.015 "results": [ 00:19:53.015 { 00:19:53.015 "job": "NVMe0n1", 00:19:53.015 "core_mask": "0x4", 00:19:53.015 "workload": "randread", 00:19:53.015 "status": "finished", 00:19:53.015 "queue_depth": 128, 00:19:53.015 "io_size": 4096, 00:19:53.015 "runtime": 8.151573, 00:19:53.015 "iops": 2150.2598332861644, 00:19:53.015 "mibps": 8.39945247377408, 00:19:53.015 "io_failed": 128, 00:19:53.015 "io_timeout": 0, 00:19:53.015 "avg_latency_us": 58994.476932075624, 00:19:53.015 "min_latency_us": 7804.741818181818, 00:19:53.015 "max_latency_us": 7015926.69090909 00:19:53.015 } 00:19:53.015 ], 00:19:53.015 "core_count": 1 00:19:53.015 } 00:19:53.015 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.015 Attaching 5 probes... 00:19:53.015 1404.295851: reset bdev controller NVMe0 00:19:53.015 1404.374525: reconnect bdev controller NVMe0 00:19:53.015 3420.263041: reconnect delay bdev controller NVMe0 00:19:53.015 3420.302678: reconnect bdev controller NVMe0 00:19:53.015 5420.650940: reconnect delay bdev controller NVMe0 00:19:53.015 5420.674039: reconnect bdev controller NVMe0 00:19:53.015 7421.027015: reconnect delay bdev controller NVMe0 00:19:53.015 7421.047132: reconnect bdev controller NVMe0 00:19:53.015 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82517 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82501 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82501 ']' 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82501 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82501 00:19:53.274 killing process with pid 82501 00:19:53.274 Received shutdown signal, test time was about 8.217973 seconds 00:19:53.274 00:19:53.274 Latency(us) 00:19:53.274 [2024-11-20T17:05:11.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.274 [2024-11-20T17:05:11.810Z] =================================================================================================================== 00:19:53.274 [2024-11-20T17:05:11.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.274 17:05:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82501' 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82501 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82501 00:19:53.274 17:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:53.841 rmmod nvme_tcp 00:19:53.841 rmmod nvme_fabrics 00:19:53.841 rmmod nvme_keyring 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82065 ']' 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82065 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82065 ']' 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82065 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82065 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82065' 00:19:53.841 killing process with pid 82065 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82065 00:19:53.841 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82065 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.100 17:05:12 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:54.100 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:54.358 00:19:54.358 real 0m47.301s 00:19:54.358 user 2m19.116s 00:19:54.358 sys 0m5.658s 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.358 ************************************ 00:19:54.358 END TEST nvmf_timeout 00:19:54.358 ************************************ 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:54.358 00:19:54.358 real 5m10.891s 00:19:54.358 user 13m30.621s 00:19:54.358 sys 1m10.379s 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.358 17:05:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
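Note: the nvmftestfini teardown interleaved with the timing output above amounts to the following sequence, a condensed sketch assembled from the commands logged here (interface, namespace, and module names are the ones this harness uses; the final namespace removal is an assumed equivalent of _remove_spdk_ns, whose body is not shown in the log):

    # Condensed sketch of the teardown logged above.
    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring as dependencies
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rules
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns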
00:19:54.358 ************************************ 00:19:54.358 END TEST nvmf_host 00:19:54.358 ************************************ 00:19:54.358 17:05:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:54.359 17:05:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:54.359 ************************************ 00:19:54.359 END TEST nvmf_tcp 00:19:54.359 ************************************ 00:19:54.359 00:19:54.359 real 12m55.900s 00:19:54.359 user 31m5.584s 00:19:54.359 sys 3m10.795s 00:19:54.359 17:05:12 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.359 17:05:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.359 17:05:12 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:19:54.359 17:05:12 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:54.359 17:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.359 17:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.359 17:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:54.359 ************************************ 00:19:54.359 START TEST nvmf_dif 00:19:54.359 ************************************ 00:19:54.359 17:05:12 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:54.618 * Looking for test storage... 00:19:54.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:54.618 17:05:12 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.618 17:05:12 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.618 17:05:12 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.618 --rc genhtml_branch_coverage=1 00:19:54.618 --rc genhtml_function_coverage=1 00:19:54.618 --rc genhtml_legend=1 00:19:54.618 --rc geninfo_all_blocks=1 00:19:54.618 --rc geninfo_unexecuted_blocks=1 00:19:54.618 00:19:54.618 ' 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.618 --rc genhtml_branch_coverage=1 00:19:54.618 --rc genhtml_function_coverage=1 00:19:54.618 --rc genhtml_legend=1 00:19:54.618 --rc geninfo_all_blocks=1 00:19:54.618 --rc geninfo_unexecuted_blocks=1 00:19:54.618 00:19:54.618 ' 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.618 --rc genhtml_branch_coverage=1 00:19:54.618 --rc genhtml_function_coverage=1 00:19:54.618 --rc genhtml_legend=1 00:19:54.618 --rc geninfo_all_blocks=1 00:19:54.618 --rc geninfo_unexecuted_blocks=1 00:19:54.618 00:19:54.618 ' 00:19:54.618 17:05:13 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.618 --rc genhtml_branch_coverage=1 00:19:54.618 --rc genhtml_function_coverage=1 00:19:54.618 --rc genhtml_legend=1 00:19:54.618 --rc geninfo_all_blocks=1 00:19:54.618 --rc geninfo_unexecuted_blocks=1 00:19:54.618 00:19:54.618 ' 00:19:54.618 17:05:13 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.618 17:05:13 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.618 17:05:13 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.618 17:05:13 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.618 17:05:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.619 17:05:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.619 17:05:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.619 17:05:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:54.619 17:05:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.619 17:05:13 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.619 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.619 17:05:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:54.619 17:05:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:54.619 17:05:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:54.619 17:05:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:54.619 17:05:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.619 17:05:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:54.619 17:05:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:54.619 Cannot find device 
"nvmf_init_br" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:54.619 Cannot find device "nvmf_init_br2" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:54.619 Cannot find device "nvmf_tgt_br" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:54.619 Cannot find device "nvmf_tgt_br2" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:54.619 Cannot find device "nvmf_init_br" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:54.619 Cannot find device "nvmf_init_br2" 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:54.619 17:05:13 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:54.619 Cannot find device "nvmf_tgt_br" 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:54.878 Cannot find device "nvmf_tgt_br2" 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:54.878 Cannot find device "nvmf_br" 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:54.878 Cannot find device "nvmf_init_if" 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:54.878 Cannot find device "nvmf_init_if2" 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:54.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:54.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:54.878 17:05:13 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:54.879 17:05:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:55.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:19:55.137 00:19:55.137 --- 10.0.0.3 ping statistics --- 00:19:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.137 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:55.137 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:55.137 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:19:55.137 00:19:55.137 --- 10.0.0.4 ping statistics --- 00:19:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.137 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:55.137 00:19:55.137 --- 10.0.0.1 ping statistics --- 00:19:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.137 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:55.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:55.137 00:19:55.137 --- 10.0.0.2 ping statistics --- 00:19:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.137 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:55.137 17:05:13 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:55.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.396 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:55.396 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:55.396 17:05:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:55.396 17:05:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83050 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.396 17:05:13 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83050 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83050 ']' 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
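Note: the "Cannot find device" messages, link setup, and pings above are the standard veth/namespace topology these tests run on: two initiator interfaces on the host (10.0.0.1, 10.0.0.2), two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all peer ends enslaved to the nvmf_br bridge, TCP port 4420 allowed, and connectivity verified in both directions. Condensed from the commands logged above into a sketch:

    # Sketch of the nvmf_veth_init topology verified by the pings above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for b in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$b" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2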
00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.396 17:05:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:55.396 [2024-11-20 17:05:13.920914] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:19:55.396 [2024-11-20 17:05:13.921027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.697 [2024-11-20 17:05:14.072401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.697 [2024-11-20 17:05:14.137928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.697 [2024-11-20 17:05:14.137987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.697 [2024-11-20 17:05:14.138013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.697 [2024-11-20 17:05:14.138024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.697 [2024-11-20 17:05:14.138033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.697 [2024-11-20 17:05:14.138473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.697 [2024-11-20 17:05:14.199340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:19:55.955 17:05:14 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 17:05:14 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.955 17:05:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:55.955 17:05:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:55.955 [2024-11-20 17:05:14.313252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.955 17:05:14 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.956 17:05:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:55.956 17:05:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:55.956 17:05:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.956 17:05:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:55.956 ************************************ 00:19:55.956 START TEST fio_dif_1_default 00:19:55.956 ************************************ 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:55.956 17:05:14 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:55.956 bdev_null0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:55.956 [2024-11-20 17:05:14.357424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.956 { 00:19:55.956 "params": { 00:19:55.956 "name": "Nvme$subsystem", 00:19:55.956 "trtype": "$TEST_TRANSPORT", 00:19:55.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.956 "adrfam": "ipv4", 00:19:55.956 "trsvcid": "$NVMF_PORT", 00:19:55.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.956 "hdgst": ${hdgst:-false}, 00:19:55.956 "ddgst": ${ddgst:-false} 00:19:55.956 }, 00:19:55.956 "method": "bdev_nvme_attach_controller" 00:19:55.956 } 00:19:55.956 EOF 00:19:55.956 )") 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:55.956 "params": { 00:19:55.956 "name": "Nvme0", 00:19:55.956 "trtype": "tcp", 00:19:55.956 "traddr": "10.0.0.3", 00:19:55.956 "adrfam": "ipv4", 00:19:55.956 "trsvcid": "4420", 00:19:55.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:55.956 "hdgst": false, 00:19:55.956 "ddgst": false 00:19:55.956 }, 00:19:55.956 "method": "bdev_nvme_attach_controller" 00:19:55.956 }' 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.956 17:05:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.214 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:56.214 fio-3.35 00:19:56.214 Starting 1 thread 00:20:08.412 00:20:08.412 filename0: (groupid=0, jobs=1): err= 0: pid=83109: Wed Nov 20 17:05:25 2024 00:20:08.412 read: IOPS=8717, BW=34.1MiB/s (35.7MB/s)(341MiB/10001msec) 00:20:08.412 slat (nsec): min=6425, max=74092, avg=8538.94, stdev=3737.54 00:20:08.412 clat (usec): min=348, max=2151, avg=433.77, stdev=38.40 00:20:08.412 lat (usec): min=354, max=2179, avg=442.31, stdev=39.14 00:20:08.412 clat percentiles (usec): 00:20:08.412 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 392], 20.00th=[ 404], 00:20:08.412 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 433], 60.00th=[ 441], 00:20:08.412 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 482], 95.00th=[ 498], 00:20:08.412 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 594], 00:20:08.412 | 99.99th=[ 660] 00:20:08.412 bw ( KiB/s): min=33053, max=35776, per=100.00%, avg=34886.58, stdev=672.59, samples=19 00:20:08.412 iops : min= 8263, max= 8944, avg=8721.63, stdev=168.18, samples=19 00:20:08.412 lat (usec) : 500=95.52%, 750=4.47% 00:20:08.412 lat (msec) : 2=0.01%, 4=0.01% 00:20:08.412 cpu : usr=85.64%, sys=12.44%, ctx=15, majf=0, minf=9 00:20:08.412 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.412 issued rwts: total=87184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.412 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:08.412 00:20:08.412 Run status group 0 (all jobs): 
00:20:08.412 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=341MiB (357MB), run=10001-10001msec 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 00:20:08.412 real 0m11.044s 00:20:08.412 user 0m9.247s 00:20:08.412 sys 0m1.513s 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 ************************************ 00:20:08.412 END TEST fio_dif_1_default 00:20:08.412 ************************************ 00:20:08.412 17:05:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:08.412 17:05:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.412 17:05:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 ************************************ 00:20:08.412 START TEST fio_dif_1_multi_subsystems 00:20:08.412 ************************************ 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 bdev_null0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 [2024-11-20 17:05:25.454417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 bdev_null1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.412 { 00:20:08.412 "params": { 00:20:08.412 "name": "Nvme$subsystem", 00:20:08.412 "trtype": "$TEST_TRANSPORT", 00:20:08.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.412 "adrfam": "ipv4", 00:20:08.412 "trsvcid": "$NVMF_PORT", 00:20:08.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.412 "hdgst": ${hdgst:-false}, 00:20:08.412 "ddgst": ${ddgst:-false} 00:20:08.412 }, 00:20:08.412 "method": "bdev_nvme_attach_controller" 00:20:08.412 } 00:20:08.412 EOF 00:20:08.412 )") 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:08.412 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
grep libasan 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.413 { 00:20:08.413 "params": { 00:20:08.413 "name": "Nvme$subsystem", 00:20:08.413 "trtype": "$TEST_TRANSPORT", 00:20:08.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.413 "adrfam": "ipv4", 00:20:08.413 "trsvcid": "$NVMF_PORT", 00:20:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.413 "hdgst": ${hdgst:-false}, 00:20:08.413 "ddgst": ${ddgst:-false} 00:20:08.413 }, 00:20:08.413 "method": "bdev_nvme_attach_controller" 00:20:08.413 } 00:20:08.413 EOF 00:20:08.413 )") 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.413 "params": { 00:20:08.413 "name": "Nvme0", 00:20:08.413 "trtype": "tcp", 00:20:08.413 "traddr": "10.0.0.3", 00:20:08.413 "adrfam": "ipv4", 00:20:08.413 "trsvcid": "4420", 00:20:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:08.413 "hdgst": false, 00:20:08.413 "ddgst": false 00:20:08.413 }, 00:20:08.413 "method": "bdev_nvme_attach_controller" 00:20:08.413 },{ 00:20:08.413 "params": { 00:20:08.413 "name": "Nvme1", 00:20:08.413 "trtype": "tcp", 00:20:08.413 "traddr": "10.0.0.3", 00:20:08.413 "adrfam": "ipv4", 00:20:08.413 "trsvcid": "4420", 00:20:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.413 "hdgst": false, 00:20:08.413 "ddgst": false 00:20:08.413 }, 00:20:08.413 "method": "bdev_nvme_attach_controller" 00:20:08.413 }' 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:08.413 17:05:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:08.413 17:05:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.413 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:08.413 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:08.413 fio-3.35 00:20:08.413 Starting 2 threads 00:20:18.385 00:20:18.385 filename0: (groupid=0, jobs=1): err= 0: pid=83273: Wed Nov 20 17:05:36 2024 00:20:18.385 read: IOPS=4745, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:20:18.385 slat (usec): min=6, max=515, avg=13.33, stdev= 6.52 00:20:18.385 clat (usec): min=453, max=1457, avg=806.94, stdev=57.56 00:20:18.385 lat (usec): min=460, max=1468, avg=820.27, stdev=58.51 00:20:18.385 clat percentiles (usec): 00:20:18.385 | 1.00th=[ 676], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 758], 00:20:18.385 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 824], 00:20:18.385 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898], 00:20:18.385 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1074], 99.95th=[ 1123], 00:20:18.385 | 99.99th=[ 1237] 00:20:18.385 bw ( KiB/s): min=18592, max=19392, per=50.06%, avg=19007.63, stdev=224.29, samples=19 00:20:18.385 iops : min= 4648, max= 4848, avg=4751.89, stdev=56.07, samples=19 00:20:18.385 lat (usec) : 500=0.02%, 750=15.30%, 1000=84.43% 00:20:18.385 lat (msec) : 2=0.25% 00:20:18.385 cpu : usr=89.12%, sys=9.08%, ctx=231, majf=0, minf=0 00:20:18.385 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.385 issued rwts: total=47456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.385 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:18.385 filename1: (groupid=0, jobs=1): err= 0: pid=83274: Wed Nov 20 17:05:36 2024 00:20:18.385 read: IOPS=4747, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:20:18.385 slat (usec): min=6, max=104, avg=13.56, stdev= 5.33 00:20:18.385 clat (usec): min=434, max=1491, avg=804.62, stdev=49.26 00:20:18.385 lat (usec): min=442, max=1565, avg=818.18, stdev=49.99 00:20:18.385 clat percentiles (usec): 00:20:18.385 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:20:18.385 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 816], 00:20:18.385 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 889], 00:20:18.385 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 979], 00:20:18.385 | 99.99th=[ 1057] 00:20:18.385 bw ( KiB/s): min=18592, max=19392, per=50.09%, avg=19017.74, stdev=220.69, samples=19 00:20:18.385 iops : min= 4648, max= 4848, avg=4754.42, stdev=55.16, samples=19 00:20:18.385 lat (usec) : 500=0.03%, 750=13.49%, 1000=86.45% 00:20:18.385 lat (msec) : 2=0.02% 00:20:18.385 cpu : usr=89.83%, sys=8.76%, ctx=128, majf=0, minf=9 00:20:18.385 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.385 issued rwts: total=47476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:18.385 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:18.385 00:20:18.385 Run status group 0 (all jobs): 00:20:18.385 READ: bw=37.1MiB/s (38.9MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=371MiB (389MB), run=10001-10001msec 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 ************************************ 00:20:18.385 END TEST fio_dif_1_multi_subsystems 00:20:18.385 ************************************ 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.385 00:20:18.385 real 0m11.189s 00:20:18.385 user 0m18.710s 00:20:18.385 sys 0m2.076s 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 17:05:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:18.385 17:05:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:18.385 17:05:36 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.385 17:05:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.385 ************************************ 00:20:18.385 START TEST fio_dif_rand_params 00:20:18.385 ************************************ 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:18.385 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.386 bdev_null0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.386 [2024-11-20 17:05:36.698567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.386 { 00:20:18.386 "params": { 00:20:18.386 "name": "Nvme$subsystem", 00:20:18.386 "trtype": "$TEST_TRANSPORT", 00:20:18.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.386 "adrfam": "ipv4", 00:20:18.386 "trsvcid": "$NVMF_PORT", 00:20:18.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.386 "hdgst": ${hdgst:-false}, 00:20:18.386 "ddgst": ${ddgst:-false} 00:20:18.386 }, 00:20:18.386 "method": "bdev_nvme_attach_controller" 00:20:18.386 } 00:20:18.386 EOF 00:20:18.386 )") 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:18.386 "params": { 00:20:18.386 "name": "Nvme0", 00:20:18.386 "trtype": "tcp", 00:20:18.386 "traddr": "10.0.0.3", 00:20:18.386 "adrfam": "ipv4", 00:20:18.386 "trsvcid": "4420", 00:20:18.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.386 "hdgst": false, 00:20:18.386 "ddgst": false 00:20:18.386 }, 00:20:18.386 "method": "bdev_nvme_attach_controller" 00:20:18.386 }' 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:18.386 17:05:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.648 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:18.648 ... 
00:20:18.648 fio-3.35 00:20:18.648 Starting 3 threads 00:20:25.216 00:20:25.216 filename0: (groupid=0, jobs=1): err= 0: pid=83431: Wed Nov 20 17:05:42 2024 00:20:25.216 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5009msec) 00:20:25.216 slat (nsec): min=7112, max=53459, avg=15562.60, stdev=5373.27 00:20:25.216 clat (usec): min=10928, max=19418, avg=11920.03, stdev=824.96 00:20:25.216 lat (usec): min=10940, max=19433, avg=11935.59, stdev=825.11 00:20:25.216 clat percentiles (usec): 00:20:25.216 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:20:25.216 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:20:25.216 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:25.216 | 99.00th=[15926], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:20:25.216 | 99.99th=[19530] 00:20:25.216 bw ( KiB/s): min=29184, max=33024, per=33.32%, avg=32102.40, stdev=1295.27, samples=10 00:20:25.216 iops : min= 228, max= 258, avg=250.80, stdev=10.12, samples=10 00:20:25.216 lat (msec) : 20=100.00% 00:20:25.216 cpu : usr=90.87%, sys=8.61%, ctx=10, majf=0, minf=0 00:20:25.216 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.216 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.216 filename0: (groupid=0, jobs=1): err= 0: pid=83432: Wed Nov 20 17:05:42 2024 00:20:25.216 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(158MiB/5014msec) 00:20:25.216 slat (nsec): min=6925, max=97566, avg=14999.01, stdev=6595.30 00:20:25.216 clat (usec): min=8127, max=19416, avg=11903.75, stdev=855.68 00:20:25.216 lat (usec): min=8135, max=19435, avg=11918.74, stdev=855.97 00:20:25.216 clat percentiles (usec): 00:20:25.216 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:20:25.216 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:20:25.216 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:25.216 | 99.00th=[16057], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:20:25.216 | 99.99th=[19530] 00:20:25.216 bw ( KiB/s): min=29952, max=33024, per=33.39%, avg=32172.60, stdev=1107.55, samples=10 00:20:25.216 iops : min= 234, max= 258, avg=251.30, stdev= 8.62, samples=10 00:20:25.216 lat (msec) : 10=0.48%, 20=99.52% 00:20:25.216 cpu : usr=91.32%, sys=8.16%, ctx=13, majf=0, minf=0 00:20:25.216 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.216 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.216 filename0: (groupid=0, jobs=1): err= 0: pid=83433: Wed Nov 20 17:05:42 2024 00:20:25.216 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5009msec) 00:20:25.216 slat (nsec): min=7230, max=54064, avg=15846.48, stdev=5691.11 00:20:25.216 clat (usec): min=10925, max=19418, avg=11917.59, stdev=824.41 00:20:25.216 lat (usec): min=10938, max=19433, avg=11933.43, stdev=824.53 00:20:25.216 clat percentiles (usec): 00:20:25.216 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:20:25.216 | 30.00th=[11469], 40.00th=[11600], 
50.00th=[11731], 60.00th=[11863], 00:20:25.216 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:25.216 | 99.00th=[15926], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:20:25.216 | 99.99th=[19530] 00:20:25.216 bw ( KiB/s): min=29184, max=33024, per=33.32%, avg=32102.40, stdev=1295.27, samples=10 00:20:25.216 iops : min= 228, max= 258, avg=250.80, stdev=10.12, samples=10 00:20:25.216 lat (msec) : 20=100.00% 00:20:25.216 cpu : usr=91.45%, sys=7.75%, ctx=63, majf=0, minf=0 00:20:25.216 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.216 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.216 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.216 00:20:25.216 Run status group 0 (all jobs): 00:20:25.216 READ: bw=94.1MiB/s (98.7MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=472MiB (495MB), run=5009-5014msec 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.216 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:25.217 17:05:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 bdev_null0 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 [2024-11-20 17:05:42.784724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 bdev_null1 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 bdev_null2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.217 { 00:20:25.217 "params": { 00:20:25.217 "name": "Nvme$subsystem", 00:20:25.217 "trtype": "$TEST_TRANSPORT", 00:20:25.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.217 "adrfam": "ipv4", 00:20:25.217 "trsvcid": "$NVMF_PORT", 00:20:25.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.217 "hdgst": ${hdgst:-false}, 00:20:25.217 "ddgst": ${ddgst:-false} 00:20:25.217 }, 00:20:25.217 "method": "bdev_nvme_attach_controller" 00:20:25.217 } 00:20:25.217 EOF 00:20:25.217 )") 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.217 { 00:20:25.217 "params": { 00:20:25.217 "name": "Nvme$subsystem", 00:20:25.217 "trtype": "$TEST_TRANSPORT", 00:20:25.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.217 "adrfam": "ipv4", 00:20:25.217 "trsvcid": "$NVMF_PORT", 00:20:25.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.217 "hdgst": ${hdgst:-false}, 00:20:25.217 "ddgst": ${ddgst:-false} 00:20:25.217 }, 00:20:25.217 "method": "bdev_nvme_attach_controller" 00:20:25.217 } 00:20:25.217 EOF 00:20:25.217 )") 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:25.217 17:05:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.217 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.217 { 00:20:25.217 "params": { 00:20:25.217 "name": "Nvme$subsystem", 00:20:25.217 "trtype": "$TEST_TRANSPORT", 00:20:25.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.217 "adrfam": "ipv4", 00:20:25.217 "trsvcid": "$NVMF_PORT", 00:20:25.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.217 "hdgst": ${hdgst:-false}, 00:20:25.217 "ddgst": ${ddgst:-false} 00:20:25.217 }, 00:20:25.217 "method": "bdev_nvme_attach_controller" 00:20:25.217 } 00:20:25.217 EOF 00:20:25.217 )") 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:25.218 "params": { 00:20:25.218 "name": "Nvme0", 00:20:25.218 "trtype": "tcp", 00:20:25.218 "traddr": "10.0.0.3", 00:20:25.218 "adrfam": "ipv4", 00:20:25.218 "trsvcid": "4420", 00:20:25.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.218 "hdgst": false, 00:20:25.218 "ddgst": false 00:20:25.218 }, 00:20:25.218 "method": "bdev_nvme_attach_controller" 00:20:25.218 },{ 00:20:25.218 "params": { 00:20:25.218 "name": "Nvme1", 00:20:25.218 "trtype": "tcp", 00:20:25.218 "traddr": "10.0.0.3", 00:20:25.218 "adrfam": "ipv4", 00:20:25.218 "trsvcid": "4420", 00:20:25.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.218 "hdgst": false, 00:20:25.218 "ddgst": false 00:20:25.218 }, 00:20:25.218 "method": "bdev_nvme_attach_controller" 00:20:25.218 },{ 00:20:25.218 "params": { 00:20:25.218 "name": "Nvme2", 00:20:25.218 "trtype": "tcp", 00:20:25.218 "traddr": "10.0.0.3", 00:20:25.218 "adrfam": "ipv4", 00:20:25.218 "trsvcid": "4420", 00:20:25.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:25.218 "hdgst": false, 00:20:25.218 "ddgst": false 00:20:25.218 }, 00:20:25.218 "method": "bdev_nvme_attach_controller" 00:20:25.218 }' 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.218 17:05:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.218 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.218 ... 00:20:25.218 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.218 ... 00:20:25.218 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.218 ... 00:20:25.218 fio-3.35 00:20:25.218 Starting 24 threads 00:20:37.421 00:20:37.421 filename0: (groupid=0, jobs=1): err= 0: pid=83528: Wed Nov 20 17:05:53 2024 00:20:37.421 read: IOPS=191, BW=767KiB/s (785kB/s)(7712KiB/10060msec) 00:20:37.421 slat (usec): min=7, max=8025, avg=23.35, stdev=258.02 00:20:37.421 clat (msec): min=9, max=238, avg=83.23, stdev=32.62 00:20:37.421 lat (msec): min=9, max=238, avg=83.26, stdev=32.62 00:20:37.421 clat percentiles (msec): 00:20:37.421 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 58], 00:20:37.421 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 85], 00:20:37.421 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 124], 00:20:37.421 | 99.00th=[ 146], 99.50th=[ 201], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.421 | 99.99th=[ 239] 00:20:37.421 bw ( KiB/s): min= 512, max= 1924, per=4.26%, avg=764.20, stdev=298.20, samples=20 00:20:37.421 iops : min= 128, max= 481, avg=191.05, stdev=74.55, samples=20 00:20:37.421 lat (msec) : 10=0.83%, 20=2.54%, 50=14.06%, 100=48.34%, 250=34.23% 00:20:37.421 cpu : usr=31.94%, sys=1.40%, ctx=965, majf=0, minf=9 00:20:37.421 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:37.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.421 filename0: (groupid=0, jobs=1): err= 0: pid=83529: Wed Nov 20 17:05:53 2024 00:20:37.421 read: IOPS=195, BW=783KiB/s (802kB/s)(7848KiB/10023msec) 00:20:37.421 slat (usec): min=3, max=8022, avg=26.84, stdev=221.61 00:20:37.421 clat (msec): min=15, max=242, avg=81.58, stdev=29.79 00:20:37.421 lat (msec): min=15, max=242, avg=81.61, stdev=29.80 00:20:37.421 clat percentiles (msec): 00:20:37.421 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 46], 20.00th=[ 56], 00:20:37.421 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 84], 00:20:37.421 | 70.00th=[ 95], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 125], 00:20:37.421 | 99.00th=[ 140], 99.50th=[ 239], 99.90th=[ 243], 99.95th=[ 243], 00:20:37.421 | 99.99th=[ 243] 00:20:37.421 bw ( KiB/s): min= 512, max= 1544, per=4.35%, avg=780.50, stdev=227.56, samples=20 00:20:37.421 iops : min= 128, max= 386, avg=195.10, stdev=56.87, samples=20 00:20:37.421 lat (msec) : 20=1.12%, 50=13.56%, 100=57.08%, 250=28.24% 00:20:37.421 cpu : usr=38.44%, sys=1.89%, ctx=1179, majf=0, minf=9 00:20:37.421 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:37.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 issued rwts: total=1962,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:37.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.421 filename0: (groupid=0, jobs=1): err= 0: pid=83530: Wed Nov 20 17:05:53 2024 00:20:37.421 read: IOPS=189, BW=759KiB/s (778kB/s)(7648KiB/10071msec) 00:20:37.421 slat (usec): min=6, max=8643, avg=32.55, stdev=269.10 00:20:37.421 clat (msec): min=9, max=239, avg=83.92, stdev=32.62 00:20:37.421 lat (msec): min=9, max=239, avg=83.95, stdev=32.62 00:20:37.421 clat percentiles (msec): 00:20:37.421 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 38], 20.00th=[ 60], 00:20:37.421 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 90], 00:20:37.421 | 70.00th=[ 106], 80.00th=[ 115], 90.00th=[ 120], 95.00th=[ 126], 00:20:37.421 | 99.00th=[ 150], 99.50th=[ 194], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.421 | 99.99th=[ 241] 00:20:37.421 bw ( KiB/s): min= 512, max= 1948, per=4.22%, avg=757.80, stdev=307.32, samples=20 00:20:37.421 iops : min= 128, max= 487, avg=189.45, stdev=76.83, samples=20 00:20:37.421 lat (msec) : 10=0.84%, 20=1.83%, 50=12.40%, 100=49.74%, 250=35.20% 00:20:37.421 cpu : usr=36.92%, sys=1.58%, ctx=1074, majf=0, minf=9 00:20:37.421 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:37.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.421 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.421 filename0: (groupid=0, jobs=1): err= 0: pid=83531: Wed Nov 20 17:05:53 2024 00:20:37.421 read: IOPS=185, BW=743KiB/s (761kB/s)(7464KiB/10045msec) 00:20:37.422 slat (usec): min=7, max=8027, avg=28.34, stdev=291.60 00:20:37.422 clat (msec): min=17, max=239, avg=85.87, stdev=30.67 00:20:37.422 lat (msec): min=18, max=239, avg=85.90, stdev=30.67 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 65], 00:20:37.422 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 92], 00:20:37.422 | 70.00th=[ 107], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 127], 00:20:37.422 | 99.00th=[ 146], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.422 | 99.99th=[ 241] 00:20:37.422 bw ( KiB/s): min= 512, max= 1529, per=4.13%, avg=742.05, stdev=225.01, samples=20 00:20:37.422 iops : min= 128, max= 382, avg=185.50, stdev=56.21, samples=20 00:20:37.422 lat (msec) : 20=0.75%, 50=13.02%, 100=51.93%, 250=34.30% 00:20:37.422 cpu : usr=36.53%, sys=1.55%, ctx=1075, majf=0, minf=9 00:20:37.422 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename0: (groupid=0, jobs=1): err= 0: pid=83532: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=185, BW=742KiB/s (759kB/s)(7444KiB/10037msec) 00:20:37.422 slat (usec): min=3, max=7019, avg=22.50, stdev=187.39 00:20:37.422 clat (msec): min=29, max=244, avg=86.09, stdev=28.09 00:20:37.422 lat (msec): min=29, max=244, avg=86.12, stdev=28.10 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 64], 00:20:37.422 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 87], 00:20:37.422 | 70.00th=[ 
106], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 126], 00:20:37.422 | 99.00th=[ 144], 99.50th=[ 243], 99.90th=[ 245], 99.95th=[ 245], 00:20:37.422 | 99.99th=[ 245] 00:20:37.422 bw ( KiB/s): min= 464, max= 1133, per=4.12%, avg=740.25, stdev=168.61, samples=20 00:20:37.422 iops : min= 116, max= 283, avg=185.05, stdev=42.12, samples=20 00:20:37.422 lat (msec) : 50=10.16%, 100=57.23%, 250=32.62% 00:20:37.422 cpu : usr=40.05%, sys=1.64%, ctx=1261, majf=0, minf=9 00:20:37.422 IO depths : 1=0.2%, 2=1.1%, 4=3.6%, 8=80.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename0: (groupid=0, jobs=1): err= 0: pid=83533: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=195, BW=783KiB/s (802kB/s)(7836KiB/10008msec) 00:20:37.422 slat (usec): min=4, max=8042, avg=34.26, stdev=283.66 00:20:37.422 clat (msec): min=7, max=239, avg=81.58, stdev=29.72 00:20:37.422 lat (msec): min=7, max=239, avg=81.62, stdev=29.72 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:20:37.422 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:20:37.422 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 125], 00:20:37.422 | 99.00th=[ 144], 99.50th=[ 236], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.422 | 99.99th=[ 241] 00:20:37.422 bw ( KiB/s): min= 512, max= 1400, per=4.26%, avg=764.21, stdev=200.68, samples=19 00:20:37.422 iops : min= 128, max= 350, avg=191.05, stdev=50.17, samples=19 00:20:37.422 lat (msec) : 10=0.31%, 20=0.15%, 50=14.80%, 100=56.46%, 250=28.28% 00:20:37.422 cpu : usr=38.53%, sys=1.53%, ctx=1141, majf=0, minf=10 00:20:37.422 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=87.0%, 8=12.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename0: (groupid=0, jobs=1): err= 0: pid=83534: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10006msec) 00:20:37.422 slat (usec): min=3, max=8041, avg=33.65, stdev=272.27 00:20:37.422 clat (msec): min=7, max=243, avg=84.27, stdev=30.39 00:20:37.422 lat (msec): min=7, max=243, avg=84.30, stdev=30.39 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 57], 00:20:37.422 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 86], 00:20:37.422 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 126], 00:20:37.422 | 99.00th=[ 144], 99.50th=[ 209], 99.90th=[ 245], 99.95th=[ 245], 00:20:37.422 | 99.99th=[ 245] 00:20:37.422 bw ( KiB/s): min= 512, max= 1280, per=4.11%, avg=737.68, stdev=189.46, samples=19 00:20:37.422 iops : min= 128, max= 320, avg=184.42, stdev=47.36, samples=19 00:20:37.422 lat (msec) : 10=0.37%, 20=0.16%, 50=12.66%, 100=52.43%, 250=34.39% 00:20:37.422 cpu : usr=37.94%, sys=1.80%, ctx=1122, majf=0, minf=9 00:20:37.422 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 
complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename0: (groupid=0, jobs=1): err= 0: pid=83535: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=195, BW=781KiB/s (800kB/s)(7824KiB/10014msec) 00:20:37.422 slat (usec): min=4, max=8045, avg=29.33, stdev=314.21 00:20:37.422 clat (msec): min=17, max=243, avg=81.78, stdev=29.45 00:20:37.422 lat (msec): min=17, max=243, avg=81.81, stdev=29.46 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:20:37.422 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:20:37.422 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 122], 00:20:37.422 | 99.00th=[ 144], 99.50th=[ 243], 99.90th=[ 245], 99.95th=[ 245], 00:20:37.422 | 99.99th=[ 245] 00:20:37.422 bw ( KiB/s): min= 512, max= 1416, per=4.26%, avg=765.47, stdev=203.86, samples=19 00:20:37.422 iops : min= 128, max= 354, avg=191.37, stdev=50.96, samples=19 00:20:37.422 lat (msec) : 20=0.31%, 50=15.29%, 100=56.39%, 250=28.02% 00:20:37.422 cpu : usr=31.64%, sys=1.55%, ctx=919, majf=0, minf=9 00:20:37.422 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename1: (groupid=0, jobs=1): err= 0: pid=83536: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=200, BW=803KiB/s (823kB/s)(8088KiB/10067msec) 00:20:37.422 slat (usec): min=5, max=10029, avg=22.53, stdev=285.34 00:20:37.422 clat (usec): min=1562, max=239967, avg=79400.87, stdev=37303.31 00:20:37.422 lat (usec): min=1568, max=239977, avg=79423.39, stdev=37306.99 00:20:37.422 clat percentiles (usec): 00:20:37.422 | 1.00th=[ 1631], 5.00th=[ 2573], 10.00th=[ 22414], 20.00th=[ 48497], 00:20:37.422 | 30.00th=[ 70779], 40.00th=[ 72877], 50.00th=[ 82314], 60.00th=[ 87557], 00:20:37.422 | 70.00th=[106431], 80.00th=[111674], 90.00th=[120062], 95.00th=[125305], 00:20:37.422 | 99.00th=[143655], 99.50th=[191890], 99.90th=[240124], 99.95th=[240124], 00:20:37.422 | 99.99th=[240124] 00:20:37.422 bw ( KiB/s): min= 512, max= 2788, per=4.47%, avg=803.80, stdev=481.90, samples=20 00:20:37.422 iops : min= 128, max= 697, avg=200.95, stdev=120.48, samples=20 00:20:37.422 lat (msec) : 2=4.65%, 4=1.68%, 10=0.69%, 20=2.62%, 50=11.87% 00:20:37.422 lat (msec) : 100=45.30%, 250=33.18% 00:20:37.422 cpu : usr=33.99%, sys=1.43%, ctx=1173, majf=0, minf=9 00:20:37.422 IO depths : 1=0.2%, 2=1.3%, 4=4.6%, 8=78.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename1: (groupid=0, jobs=1): err= 0: pid=83537: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=184, BW=739KiB/s (756kB/s)(7416KiB/10041msec) 00:20:37.422 slat (usec): min=7, max=8037, avg=33.41, stdev=329.91 00:20:37.422 clat (msec): min=20, max=238, avg=86.45, stdev=29.49 00:20:37.422 lat (msec): min=20, max=238, 
avg=86.48, stdev=29.49 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 63], 00:20:37.422 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 91], 00:20:37.422 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 127], 00:20:37.422 | 99.00th=[ 150], 99.50th=[ 201], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.422 | 99.99th=[ 239] 00:20:37.422 bw ( KiB/s): min= 440, max= 1386, per=4.10%, avg=736.10, stdev=201.04, samples=20 00:20:37.422 iops : min= 110, max= 346, avg=184.00, stdev=50.17, samples=20 00:20:37.422 lat (msec) : 50=12.24%, 100=51.83%, 250=35.92% 00:20:37.422 cpu : usr=40.03%, sys=1.58%, ctx=1295, majf=0, minf=9 00:20:37.422 IO depths : 1=0.2%, 2=0.7%, 4=2.5%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename1: (groupid=0, jobs=1): err= 0: pid=83538: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=184, BW=736KiB/s (754kB/s)(7392KiB/10043msec) 00:20:37.422 slat (usec): min=4, max=4052, avg=22.28, stdev=132.76 00:20:37.422 clat (msec): min=22, max=241, avg=86.70, stdev=31.74 00:20:37.422 lat (msec): min=22, max=241, avg=86.72, stdev=31.74 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 63], 00:20:37.422 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 99], 00:20:37.422 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 130], 00:20:37.422 | 99.00th=[ 157], 99.50th=[ 236], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.422 | 99.99th=[ 241] 00:20:37.422 bw ( KiB/s): min= 460, max= 1648, per=4.09%, avg=735.30, stdev=256.90, samples=20 00:20:37.422 iops : min= 115, max= 412, avg=183.80, stdev=64.22, samples=20 00:20:37.422 lat (msec) : 50=12.18%, 100=48.97%, 250=38.85% 00:20:37.422 cpu : usr=40.82%, sys=1.87%, ctx=1303, majf=0, minf=9 00:20:37.422 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename1: (groupid=0, jobs=1): err= 0: pid=83539: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=194, BW=778KiB/s (797kB/s)(7784KiB/10003msec) 00:20:37.422 slat (usec): min=4, max=8049, avg=35.88, stdev=363.16 00:20:37.422 clat (msec): min=3, max=238, avg=82.10, stdev=29.34 00:20:37.422 lat (msec): min=3, max=238, avg=82.13, stdev=29.35 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 20], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 57], 00:20:37.422 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 83], 00:20:37.422 | 70.00th=[ 96], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 124], 00:20:37.422 | 99.00th=[ 142], 99.50th=[ 234], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.422 | 99.99th=[ 239] 00:20:37.422 bw ( KiB/s): min= 560, max= 1253, per=4.20%, avg=754.37, stdev=165.30, samples=19 00:20:37.422 iops : min= 140, max= 313, avg=188.58, stdev=41.28, samples=19 00:20:37.422 lat (msec) : 4=0.31%, 10=0.67%, 20=0.15%, 50=11.15%, 100=58.48% 00:20:37.422 lat (msec) : 250=29.24% 
00:20:37.422 cpu : usr=36.37%, sys=1.68%, ctx=1093, majf=0, minf=9 00:20:37.422 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:37.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.422 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.422 filename1: (groupid=0, jobs=1): err= 0: pid=83540: Wed Nov 20 17:05:53 2024 00:20:37.422 read: IOPS=196, BW=786KiB/s (805kB/s)(7900KiB/10054msec) 00:20:37.422 slat (usec): min=5, max=1038, avg=15.58, stdev=24.16 00:20:37.422 clat (msec): min=9, max=240, avg=81.27, stdev=33.91 00:20:37.422 lat (msec): min=9, max=240, avg=81.28, stdev=33.91 00:20:37.422 clat percentiles (msec): 00:20:37.422 | 1.00th=[ 13], 5.00th=[ 19], 10.00th=[ 32], 20.00th=[ 54], 00:20:37.422 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 87], 00:20:37.422 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 124], 00:20:37.423 | 99.00th=[ 150], 99.50th=[ 194], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.423 | 99.99th=[ 241] 00:20:37.423 bw ( KiB/s): min= 488, max= 2175, per=4.36%, avg=783.15, stdev=356.22, samples=20 00:20:37.423 iops : min= 122, max= 543, avg=195.75, stdev=88.90, samples=20 00:20:37.423 lat (msec) : 10=0.81%, 20=4.20%, 50=12.05%, 100=49.06%, 250=33.87% 00:20:37.423 cpu : usr=43.94%, sys=1.73%, ctx=1481, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename1: (groupid=0, jobs=1): err= 0: pid=83541: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=180, BW=721KiB/s (738kB/s)(7216KiB/10011msec) 00:20:37.423 slat (usec): min=3, max=8033, avg=34.56, stdev=377.04 00:20:37.423 clat (msec): min=18, max=238, avg=88.61, stdev=32.51 00:20:37.423 lat (msec): min=18, max=238, avg=88.65, stdev=32.50 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 61], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 104], 00:20:37.423 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 121], 95.00th=[ 132], 00:20:37.423 | 99.00th=[ 171], 99.50th=[ 239], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.423 | 99.99th=[ 239] 00:20:37.423 bw ( KiB/s): min= 384, max= 1392, per=3.91%, avg=701.47, stdev=227.72, samples=19 00:20:37.423 iops : min= 96, max= 348, avg=175.37, stdev=56.93, samples=19 00:20:37.423 lat (msec) : 20=0.39%, 50=12.69%, 100=46.56%, 250=40.35% 00:20:37.423 cpu : usr=35.18%, sys=1.39%, ctx=1150, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename1: (groupid=0, jobs=1): err= 0: pid=83542: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=190, BW=761KiB/s (779kB/s)(7612KiB/10004msec) 00:20:37.423 slat (usec): 
min=4, max=4048, avg=34.40, stdev=236.72 00:20:37.423 clat (usec): min=1637, max=240883, avg=83958.28, stdev=30856.28 00:20:37.423 lat (usec): min=1645, max=240903, avg=83992.67, stdev=30858.06 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 4], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 60], 00:20:37.423 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 85], 00:20:37.423 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 124], 00:20:37.423 | 99.00th=[ 144], 99.50th=[ 205], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.423 | 99.99th=[ 241] 00:20:37.423 bw ( KiB/s): min= 528, max= 1024, per=4.04%, avg=725.47, stdev=148.24, samples=19 00:20:37.423 iops : min= 132, max= 256, avg=181.37, stdev=37.06, samples=19 00:20:37.423 lat (msec) : 2=0.32%, 4=1.21%, 10=0.84%, 20=0.16%, 50=9.35% 00:20:37.423 lat (msec) : 100=54.49%, 250=33.63% 00:20:37.423 cpu : usr=40.72%, sys=1.75%, ctx=1254, majf=0, minf=9 00:20:37.423 IO depths : 1=0.2%, 2=1.4%, 4=5.0%, 8=78.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=88.1%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename1: (groupid=0, jobs=1): err= 0: pid=83543: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=189, BW=759KiB/s (778kB/s)(7620KiB/10035msec) 00:20:37.423 slat (usec): min=4, max=8034, avg=41.36, stdev=449.32 00:20:37.423 clat (msec): min=12, max=236, avg=84.06, stdev=29.35 00:20:37.423 lat (msec): min=12, max=236, avg=84.10, stdev=29.34 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 61], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:20:37.423 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 125], 00:20:37.423 | 99.00th=[ 146], 99.50th=[ 199], 99.90th=[ 236], 99.95th=[ 236], 00:20:37.423 | 99.99th=[ 236] 00:20:37.423 bw ( KiB/s): min= 456, max= 1479, per=4.21%, avg=755.15, stdev=210.93, samples=20 00:20:37.423 iops : min= 114, max= 369, avg=188.75, stdev=52.60, samples=20 00:20:37.423 lat (msec) : 20=0.16%, 50=13.81%, 100=55.12%, 250=30.92% 00:20:37.423 cpu : usr=31.67%, sys=1.52%, ctx=926, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83544: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=178, BW=713KiB/s (730kB/s)(7176KiB/10061msec) 00:20:37.423 slat (usec): min=4, max=8019, avg=21.96, stdev=211.48 00:20:37.423 clat (msec): min=9, max=238, avg=89.41, stdev=35.41 00:20:37.423 lat (msec): min=9, max=238, avg=89.43, stdev=35.41 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 70], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 107], 00:20:37.423 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 132], 00:20:37.423 | 99.00th=[ 167], 99.50th=[ 239], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.423 | 99.99th=[ 239] 00:20:37.423 bw ( KiB/s): min= 496, max= 1908, 
per=3.96%, avg=710.60, stdev=310.98, samples=20 00:20:37.423 iops : min= 124, max= 477, avg=177.65, stdev=77.75, samples=20 00:20:37.423 lat (msec) : 10=0.89%, 20=2.56%, 50=10.03%, 100=42.42%, 250=44.09% 00:20:37.423 cpu : usr=36.47%, sys=1.69%, ctx=981, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=2.9%, 4=11.5%, 8=70.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=90.8%, 8=6.6%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83545: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=189, BW=757KiB/s (775kB/s)(7592KiB/10032msec) 00:20:37.423 slat (usec): min=4, max=7853, avg=37.97, stdev=333.44 00:20:37.423 clat (msec): min=20, max=236, avg=84.30, stdev=28.44 00:20:37.423 lat (msec): min=20, max=236, avg=84.34, stdev=28.44 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 59], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 85], 00:20:37.423 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 127], 00:20:37.423 | 99.00th=[ 148], 99.50th=[ 201], 99.90th=[ 236], 99.95th=[ 236], 00:20:37.423 | 99.99th=[ 236] 00:20:37.423 bw ( KiB/s): min= 512, max= 1145, per=4.21%, avg=755.25, stdev=171.02, samples=20 00:20:37.423 iops : min= 128, max= 286, avg=188.80, stdev=42.73, samples=20 00:20:37.423 lat (msec) : 50=11.64%, 100=56.90%, 250=31.45% 00:20:37.423 cpu : usr=40.51%, sys=1.79%, ctx=1349, majf=0, minf=9 00:20:37.423 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=81.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83546: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10017msec) 00:20:37.423 slat (nsec): min=3956, max=64862, avg=16461.57, stdev=8448.39 00:20:37.423 clat (msec): min=15, max=238, avg=85.68, stdev=30.22 00:20:37.423 lat (msec): min=15, max=238, avg=85.70, stdev=30.22 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 91], 00:20:37.423 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 125], 00:20:37.423 | 99.00th=[ 153], 99.50th=[ 230], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.423 | 99.99th=[ 239] 00:20:37.423 bw ( KiB/s): min= 512, max= 1320, per=4.14%, avg=743.20, stdev=199.97, samples=20 00:20:37.423 iops : min= 128, max= 330, avg=185.80, stdev=49.99, samples=20 00:20:37.423 lat (msec) : 20=0.16%, 50=11.78%, 100=51.55%, 250=36.51% 00:20:37.423 cpu : usr=36.57%, sys=1.69%, ctx=1210, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83547: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=189, BW=758KiB/s (776kB/s)(7596KiB/10023msec) 00:20:37.423 slat (usec): min=4, max=4039, avg=24.09, stdev=159.82 00:20:37.423 clat (msec): min=23, max=239, avg=84.26, stdev=30.89 00:20:37.423 lat (msec): min=23, max=239, avg=84.29, stdev=30.89 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 59], 00:20:37.423 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 85], 00:20:37.423 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 129], 00:20:37.423 | 99.00th=[ 155], 99.50th=[ 205], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.423 | 99.99th=[ 241] 00:20:37.423 bw ( KiB/s): min= 512, max= 1410, per=4.21%, avg=755.70, stdev=216.10, samples=20 00:20:37.423 iops : min= 128, max= 352, avg=188.90, stdev=53.95, samples=20 00:20:37.423 lat (msec) : 50=14.74%, 100=51.71%, 250=33.54% 00:20:37.423 cpu : usr=38.34%, sys=1.86%, ctx=1053, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83548: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=171, BW=686KiB/s (703kB/s)(6892KiB/10041msec) 00:20:37.423 slat (usec): min=6, max=5022, avg=36.42, stdev=289.89 00:20:37.423 clat (msec): min=20, max=238, avg=92.90, stdev=32.29 00:20:37.423 lat (msec): min=20, max=238, avg=92.93, stdev=32.30 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 70], 00:20:37.423 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 107], 00:20:37.423 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 133], 00:20:37.423 | 99.00th=[ 159], 99.50th=[ 239], 99.90th=[ 239], 99.95th=[ 239], 00:20:37.423 | 99.99th=[ 239] 00:20:37.423 bw ( KiB/s): min= 382, max= 1520, per=3.82%, avg=685.00, stdev=235.82, samples=20 00:20:37.423 iops : min= 95, max= 380, avg=171.20, stdev=59.00, samples=20 00:20:37.423 lat (msec) : 50=10.50%, 100=40.74%, 250=48.75% 00:20:37.423 cpu : usr=43.18%, sys=2.01%, ctx=1232, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=3.4%, 4=13.4%, 8=68.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=91.1%, 8=5.9%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83549: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=172, BW=691KiB/s (707kB/s)(6952KiB/10065msec) 00:20:37.423 slat (usec): min=4, max=7979, avg=21.87, stdev=197.39 00:20:37.423 clat (msec): min=14, max=238, avg=92.39, stdev=34.86 00:20:37.423 lat (msec): min=14, max=238, avg=92.41, stdev=34.86 00:20:37.423 clat percentiles (msec): 00:20:37.423 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 70], 00:20:37.423 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 107], 00:20:37.423 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 134], 00:20:37.423 | 99.00th=[ 163], 99.50th=[ 239], 99.90th=[ 
239], 99.95th=[ 239], 00:20:37.423 | 99.99th=[ 239] 00:20:37.423 bw ( KiB/s): min= 496, max= 1792, per=3.83%, avg=688.80, stdev=284.15, samples=20 00:20:37.423 iops : min= 124, max= 448, avg=172.20, stdev=71.04, samples=20 00:20:37.423 lat (msec) : 20=4.32%, 50=7.48%, 100=39.59%, 250=48.62% 00:20:37.423 cpu : usr=44.49%, sys=1.78%, ctx=1968, majf=0, minf=9 00:20:37.423 IO depths : 1=0.1%, 2=4.3%, 4=17.1%, 8=64.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:20:37.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 complete : 0=0.0%, 4=92.1%, 8=4.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.423 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.423 filename2: (groupid=0, jobs=1): err= 0: pid=83550: Wed Nov 20 17:05:53 2024 00:20:37.423 read: IOPS=184, BW=739KiB/s (757kB/s)(7420KiB/10043msec) 00:20:37.423 slat (usec): min=4, max=7961, avg=33.10, stdev=319.20 00:20:37.423 clat (msec): min=18, max=239, avg=86.37, stdev=28.43 00:20:37.423 lat (msec): min=18, max=239, avg=86.41, stdev=28.44 00:20:37.423 clat percentiles (msec): 00:20:37.424 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 50], 20.00th=[ 64], 00:20:37.424 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90], 00:20:37.424 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 127], 00:20:37.424 | 99.00th=[ 142], 99.50th=[ 205], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.424 | 99.99th=[ 241] 00:20:37.424 bw ( KiB/s): min= 456, max= 1222, per=4.11%, avg=737.50, stdev=170.82, samples=20 00:20:37.424 iops : min= 114, max= 305, avg=184.35, stdev=42.63, samples=20 00:20:37.424 lat (msec) : 20=0.11%, 50=10.03%, 100=55.42%, 250=34.45% 00:20:37.424 cpu : usr=34.70%, sys=1.25%, ctx=1015, majf=0, minf=9 00:20:37.424 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.424 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.424 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.424 filename2: (groupid=0, jobs=1): err= 0: pid=83551: Wed Nov 20 17:05:53 2024 00:20:37.424 read: IOPS=181, BW=726KiB/s (743kB/s)(7276KiB/10024msec) 00:20:37.424 slat (usec): min=5, max=8038, avg=29.75, stdev=297.20 00:20:37.424 clat (msec): min=23, max=239, avg=87.99, stdev=32.86 00:20:37.424 lat (msec): min=23, max=239, avg=88.02, stdev=32.85 00:20:37.424 clat percentiles (msec): 00:20:37.424 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 61], 00:20:37.424 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 99], 00:20:37.424 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 132], 00:20:37.424 | 99.00th=[ 161], 99.50th=[ 241], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.424 | 99.99th=[ 241] 00:20:37.424 bw ( KiB/s): min= 400, max= 1408, per=4.02%, avg=721.20, stdev=227.13, samples=20 00:20:37.424 iops : min= 100, max= 352, avg=180.30, stdev=56.78, samples=20 00:20:37.424 lat (msec) : 50=14.29%, 100=47.28%, 250=38.43% 00:20:37.424 cpu : usr=33.94%, sys=1.53%, ctx=1164, majf=0, minf=9 00:20:37.424 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.424 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.424 issued rwts: total=1819,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:37.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.424 00:20:37.424 Run status group 0 (all jobs): 00:20:37.424 READ: bw=17.5MiB/s (18.4MB/s), 686KiB/s-803KiB/s (703kB/s-823kB/s), io=177MiB (185MB), run=10003-10071msec 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:37.424 17:05:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 bdev_null0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 [2024-11-20 17:05:54.249165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:37.424 
17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 bdev_null1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.424 { 00:20:37.424 "params": { 00:20:37.424 "name": "Nvme$subsystem", 00:20:37.424 "trtype": "$TEST_TRANSPORT", 00:20:37.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.424 "adrfam": "ipv4", 00:20:37.424 "trsvcid": "$NVMF_PORT", 00:20:37.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.424 "hdgst": ${hdgst:-false}, 00:20:37.424 "ddgst": ${ddgst:-false} 00:20:37.424 }, 00:20:37.424 "method": "bdev_nvme_attach_controller" 00:20:37.424 } 00:20:37.424 EOF 00:20:37.424 )") 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.424 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.424 { 00:20:37.424 "params": { 00:20:37.424 "name": "Nvme$subsystem", 00:20:37.424 "trtype": "$TEST_TRANSPORT", 00:20:37.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.424 "adrfam": "ipv4", 00:20:37.424 "trsvcid": "$NVMF_PORT", 00:20:37.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.424 "hdgst": ${hdgst:-false}, 00:20:37.424 "ddgst": ${ddgst:-false} 00:20:37.424 }, 00:20:37.424 "method": "bdev_nvme_attach_controller" 00:20:37.425 } 00:20:37.425 EOF 00:20:37.425 )") 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.425 "params": { 00:20:37.425 "name": "Nvme0", 00:20:37.425 "trtype": "tcp", 00:20:37.425 "traddr": "10.0.0.3", 00:20:37.425 "adrfam": "ipv4", 00:20:37.425 "trsvcid": "4420", 00:20:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.425 "hdgst": false, 00:20:37.425 "ddgst": false 00:20:37.425 }, 00:20:37.425 "method": "bdev_nvme_attach_controller" 00:20:37.425 },{ 00:20:37.425 "params": { 00:20:37.425 "name": "Nvme1", 00:20:37.425 "trtype": "tcp", 00:20:37.425 "traddr": "10.0.0.3", 00:20:37.425 "adrfam": "ipv4", 00:20:37.425 "trsvcid": "4420", 00:20:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.425 "hdgst": false, 00:20:37.425 "ddgst": false 00:20:37.425 }, 00:20:37.425 "method": "bdev_nvme_attach_controller" 00:20:37.425 }' 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.425 17:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.425 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:37.425 ... 00:20:37.425 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:37.425 ... 
00:20:37.425 fio-3.35 00:20:37.425 Starting 4 threads 00:20:41.616 00:20:41.616 filename0: (groupid=0, jobs=1): err= 0: pid=83694: Wed Nov 20 17:06:00 2024 00:20:41.616 read: IOPS=1956, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5002msec) 00:20:41.616 slat (usec): min=4, max=141, avg=22.49, stdev=12.34 00:20:41.616 clat (usec): min=881, max=8025, avg=4018.64, stdev=1082.30 00:20:41.616 lat (usec): min=889, max=8073, avg=4041.13, stdev=1082.56 00:20:41.616 clat percentiles (usec): 00:20:41.616 | 1.00th=[ 1532], 5.00th=[ 2040], 10.00th=[ 2409], 20.00th=[ 2868], 00:20:41.616 | 30.00th=[ 3294], 40.00th=[ 4047], 50.00th=[ 4424], 60.00th=[ 4621], 00:20:41.616 | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5342], 00:20:41.616 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6980], 99.95th=[ 7570], 00:20:41.616 | 99.99th=[ 8029] 00:20:41.616 bw ( KiB/s): min=13712, max=18000, per=24.99%, avg=15713.78, stdev=1423.34, samples=9 00:20:41.616 iops : min= 1714, max= 2250, avg=1964.22, stdev=177.92, samples=9 00:20:41.616 lat (usec) : 1000=0.09% 00:20:41.616 lat (msec) : 2=4.11%, 4=34.82%, 10=60.98% 00:20:41.616 cpu : usr=94.94%, sys=4.10%, ctx=96, majf=0, minf=10 00:20:41.616 IO depths : 1=1.0%, 2=9.2%, 4=58.7%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 issued rwts: total=9788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.616 filename0: (groupid=0, jobs=1): err= 0: pid=83695: Wed Nov 20 17:06:00 2024 00:20:41.616 read: IOPS=1990, BW=15.5MiB/s (16.3MB/s)(77.8MiB/5001msec) 00:20:41.616 slat (nsec): min=6831, max=89986, avg=18875.48, stdev=11333.23 00:20:41.616 clat (usec): min=568, max=7541, avg=3962.47, stdev=1216.94 00:20:41.616 lat (usec): min=579, max=7566, avg=3981.35, stdev=1217.71 00:20:41.616 clat percentiles (usec): 00:20:41.616 | 1.00th=[ 1156], 5.00th=[ 1418], 10.00th=[ 2245], 20.00th=[ 2704], 00:20:41.616 | 30.00th=[ 3326], 40.00th=[ 3916], 50.00th=[ 4359], 60.00th=[ 4555], 00:20:41.616 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5407], 00:20:41.616 | 99.00th=[ 5932], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 6915], 00:20:41.616 | 99.99th=[ 7570] 00:20:41.616 bw ( KiB/s): min=12544, max=18672, per=25.31%, avg=15916.44, stdev=2203.46, samples=9 00:20:41.616 iops : min= 1568, max= 2334, avg=1989.56, stdev=275.43, samples=9 00:20:41.616 lat (usec) : 750=0.03%, 1000=0.33% 00:20:41.616 lat (msec) : 2=6.38%, 4=35.24%, 10=58.02% 00:20:41.616 cpu : usr=94.60%, sys=4.56%, ctx=9, majf=0, minf=0 00:20:41.616 IO depths : 1=0.3%, 2=9.5%, 4=58.6%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 issued rwts: total=9953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.616 filename1: (groupid=0, jobs=1): err= 0: pid=83696: Wed Nov 20 17:06:00 2024 00:20:41.616 read: IOPS=2011, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5002msec) 00:20:41.616 slat (nsec): min=6648, max=97074, avg=19751.38, stdev=11670.17 00:20:41.616 clat (usec): min=552, max=7796, avg=3919.74, stdev=1069.06 00:20:41.616 lat (usec): min=587, max=7829, avg=3939.49, stdev=1070.06 00:20:41.616 clat percentiles (usec): 00:20:41.616 | 1.00th=[ 1549], 
5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2737], 00:20:41.616 | 30.00th=[ 3064], 40.00th=[ 3884], 50.00th=[ 4359], 60.00th=[ 4490], 00:20:41.616 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5080], 95.00th=[ 5276], 00:20:41.616 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7046], 99.95th=[ 7308], 00:20:41.616 | 99.99th=[ 7701] 00:20:41.616 bw ( KiB/s): min=13920, max=18192, per=25.63%, avg=16117.33, stdev=1583.29, samples=9 00:20:41.616 iops : min= 1740, max= 2274, avg=2014.67, stdev=197.91, samples=9 00:20:41.616 lat (usec) : 750=0.01%, 1000=0.02% 00:20:41.616 lat (msec) : 2=2.22%, 4=39.93%, 10=57.82% 00:20:41.616 cpu : usr=93.12%, sys=5.84%, ctx=7, majf=0, minf=0 00:20:41.616 IO depths : 1=0.5%, 2=7.5%, 4=59.7%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 issued rwts: total=10062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.616 filename1: (groupid=0, jobs=1): err= 0: pid=83697: Wed Nov 20 17:06:00 2024 00:20:41.616 read: IOPS=1902, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5002msec) 00:20:41.616 slat (usec): min=5, max=103, avg=22.58, stdev=12.16 00:20:41.616 clat (usec): min=1152, max=7593, avg=4131.72, stdev=1054.51 00:20:41.616 lat (usec): min=1165, max=7658, avg=4154.29, stdev=1054.59 00:20:41.616 clat percentiles (usec): 00:20:41.616 | 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2540], 20.00th=[ 2900], 00:20:41.616 | 30.00th=[ 3687], 40.00th=[ 4293], 50.00th=[ 4490], 60.00th=[ 4686], 00:20:41.616 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5407], 00:20:41.616 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6980], 00:20:41.616 | 99.99th=[ 7570] 00:20:41.616 bw ( KiB/s): min=13360, max=18096, per=24.45%, avg=15372.22, stdev=1757.98, samples=9 00:20:41.616 iops : min= 1670, max= 2262, avg=1921.44, stdev=219.76, samples=9 00:20:41.616 lat (msec) : 2=3.15%, 4=31.21%, 10=65.63% 00:20:41.616 cpu : usr=94.64%, sys=4.50%, ctx=7, majf=0, minf=0 00:20:41.616 IO depths : 1=1.1%, 2=11.3%, 4=57.7%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.616 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.617 issued rwts: total=9515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.617 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.617 00:20:41.617 Run status group 0 (all jobs): 00:20:41.617 READ: bw=61.4MiB/s (64.4MB/s), 14.9MiB/s-15.7MiB/s (15.6MB/s-16.5MB/s), io=307MiB (322MB), run=5001-5002msec 00:20:41.875 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:41.875 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:41.875 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 17:06:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 ************************************ 00:20:41.876 END TEST fio_dif_rand_params 00:20:41.876 ************************************ 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.876 00:20:41.876 real 0m23.737s 00:20:41.876 user 2m5.806s 00:20:41.876 sys 0m7.021s 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.876 17:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 17:06:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:42.135 17:06:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.135 17:06:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.135 17:06:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 ************************************ 00:20:42.135 START TEST fio_dif_digest 00:20:42.135 ************************************ 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:42.135 17:06:00 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 bdev_null0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 [2024-11-20 17:06:00.508993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:42.135 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.136 { 00:20:42.136 "params": { 00:20:42.136 "name": "Nvme$subsystem", 00:20:42.136 "trtype": "$TEST_TRANSPORT", 00:20:42.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.136 "adrfam": "ipv4", 00:20:42.136 "trsvcid": "$NVMF_PORT", 00:20:42.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.136 "hdgst": ${hdgst:-false}, 00:20:42.136 "ddgst": ${ddgst:-false} 00:20:42.136 }, 00:20:42.136 "method": "bdev_nvme_attach_controller" 00:20:42.136 } 00:20:42.136 EOF 00:20:42.136 )") 00:20:42.136 
17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:42.136 "params": { 00:20:42.136 "name": "Nvme0", 00:20:42.136 "trtype": "tcp", 00:20:42.136 "traddr": "10.0.0.3", 00:20:42.136 "adrfam": "ipv4", 00:20:42.136 "trsvcid": "4420", 00:20:42.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.136 "hdgst": true, 00:20:42.136 "ddgst": true 00:20:42.136 }, 00:20:42.136 "method": "bdev_nvme_attach_controller" 00:20:42.136 }' 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.136 17:06:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.395 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:42.395 ... 
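Note: the only functional change from the rand_params config above is the digest settings; for this test the generated attach params enable both the NVMe/TCP header digest and data digest ("hdgst": true, "ddgst": true), so every PDU on the connection carries CRC32C protection. Expressed as a plain JSON-RPC request (a sketch; the JSON-RPC 2.0 envelope is added for illustration, only the params come from the printed config), the attach step amounts to:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}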
00:20:42.395 fio-3.35 00:20:42.395 Starting 3 threads 00:20:54.649 00:20:54.649 filename0: (groupid=0, jobs=1): err= 0: pid=83803: Wed Nov 20 17:06:11 2024 00:20:54.649 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(272MiB/10007msec) 00:20:54.649 slat (nsec): min=6828, max=75818, avg=13980.16, stdev=8658.39 00:20:54.649 clat (usec): min=5559, max=15921, avg=13766.11, stdev=1092.15 00:20:54.649 lat (usec): min=5566, max=15942, avg=13780.09, stdev=1093.00 00:20:54.649 clat percentiles (usec): 00:20:54.649 | 1.00th=[11994], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:20:54.649 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:20:54.649 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:20:54.650 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15926], 99.95th=[15926], 00:20:54.650 | 99.99th=[15926] 00:20:54.650 bw ( KiB/s): min=25344, max=30720, per=33.39%, avg=27841.89, stdev=1938.20, samples=19 00:20:54.650 iops : min= 198, max= 240, avg=217.47, stdev=15.19, samples=19 00:20:54.650 lat (msec) : 10=0.14%, 20=99.86% 00:20:54.650 cpu : usr=93.62%, sys=5.66%, ctx=16, majf=0, minf=0 00:20:54.650 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.650 filename0: (groupid=0, jobs=1): err= 0: pid=83804: Wed Nov 20 17:06:11 2024 00:20:54.650 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(272MiB/10003msec) 00:20:54.650 slat (nsec): min=4185, max=57890, avg=14904.59, stdev=8888.92 00:20:54.650 clat (usec): min=9821, max=17715, avg=13778.10, stdev=1070.30 00:20:54.650 lat (usec): min=9830, max=17743, avg=13793.00, stdev=1070.73 00:20:54.650 clat percentiles (usec): 00:20:54.650 | 1.00th=[11863], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:20:54.650 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:20:54.650 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:20:54.650 | 99.00th=[15533], 99.50th=[15664], 99.90th=[17695], 99.95th=[17695], 00:20:54.650 | 99.99th=[17695] 00:20:54.650 bw ( KiB/s): min=25344, max=31488, per=33.35%, avg=27809.68, stdev=1925.61, samples=19 00:20:54.650 iops : min= 198, max= 246, avg=217.26, stdev=15.04, samples=19 00:20:54.650 lat (msec) : 10=0.14%, 20=99.86% 00:20:54.650 cpu : usr=94.16%, sys=5.27%, ctx=133, majf=0, minf=0 00:20:54.650 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.650 filename0: (groupid=0, jobs=1): err= 0: pid=83805: Wed Nov 20 17:06:11 2024 00:20:54.650 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(272MiB/10003msec) 00:20:54.650 slat (nsec): min=6762, max=53380, avg=11548.82, stdev=5511.40 00:20:54.650 clat (usec): min=11567, max=17206, avg=13787.35, stdev=1061.43 00:20:54.650 lat (usec): min=11585, max=17233, avg=13798.90, stdev=1062.13 00:20:54.650 clat percentiles (usec): 00:20:54.650 | 1.00th=[11994], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:20:54.650 | 30.00th=[12911], 
40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:20:54.650 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:20:54.650 | 99.00th=[15533], 99.50th=[15664], 99.90th=[17171], 99.95th=[17171], 00:20:54.650 | 99.99th=[17171] 00:20:54.650 bw ( KiB/s): min=25344, max=31488, per=33.35%, avg=27812.63, stdev=1943.56, samples=19 00:20:54.650 iops : min= 198, max= 246, avg=217.26, stdev=15.18, samples=19 00:20:54.650 lat (msec) : 20=100.00% 00:20:54.650 cpu : usr=94.79%, sys=4.65%, ctx=14, majf=0, minf=0 00:20:54.650 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.650 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.650 00:20:54.650 Run status group 0 (all jobs): 00:20:54.650 READ: bw=81.4MiB/s (85.4MB/s), 27.1MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=815MiB (854MB), run=10003-10007msec 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.650 ************************************ 00:20:54.650 END TEST fio_dif_digest 00:20:54.650 ************************************ 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.650 00:20:54.650 real 0m11.039s 00:20:54.650 user 0m28.931s 00:20:54.650 sys 0m1.841s 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.650 17:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.650 17:06:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:54.650 17:06:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.650 rmmod nvme_tcp 00:20:54.650 rmmod nvme_fabrics 00:20:54.650 rmmod nvme_keyring 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.650 
17:06:11 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83050 ']' 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83050 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83050 ']' 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83050 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83050 00:20:54.650 killing process with pid 83050 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83050' 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83050 00:20:54.650 17:06:11 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83050 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:54.650 17:06:11 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:54.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.650 Waiting for block devices as requested 00:20:54.650 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:54.650 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:54.650 17:06:12 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.651 
17:06:12 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.651 17:06:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:54.651 17:06:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.651 17:06:12 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:54.651 ************************************ 00:20:54.651 END TEST nvmf_dif 00:20:54.651 ************************************ 00:20:54.651 00:20:54.651 real 0m59.909s 00:20:54.651 user 3m50.196s 00:20:54.651 sys 0m17.619s 00:20:54.651 17:06:12 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.651 17:06:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.651 17:06:12 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:54.651 17:06:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:54.651 17:06:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.651 17:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:54.651 ************************************ 00:20:54.651 START TEST nvmf_abort_qd_sizes 00:20:54.651 ************************************ 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:54.651 * Looking for test storage... 00:20:54.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.651 17:06:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.651 --rc genhtml_branch_coverage=1 00:20:54.651 --rc genhtml_function_coverage=1 00:20:54.651 --rc genhtml_legend=1 00:20:54.651 --rc geninfo_all_blocks=1 00:20:54.651 --rc geninfo_unexecuted_blocks=1 00:20:54.651 00:20:54.651 ' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.651 --rc genhtml_branch_coverage=1 00:20:54.651 --rc genhtml_function_coverage=1 00:20:54.651 --rc genhtml_legend=1 00:20:54.651 --rc geninfo_all_blocks=1 00:20:54.651 --rc geninfo_unexecuted_blocks=1 00:20:54.651 00:20:54.651 ' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.651 --rc genhtml_branch_coverage=1 00:20:54.651 --rc genhtml_function_coverage=1 00:20:54.651 --rc genhtml_legend=1 00:20:54.651 --rc geninfo_all_blocks=1 00:20:54.651 --rc geninfo_unexecuted_blocks=1 00:20:54.651 00:20:54.651 ' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.651 --rc genhtml_branch_coverage=1 00:20:54.651 --rc genhtml_function_coverage=1 00:20:54.651 --rc genhtml_legend=1 00:20:54.651 --rc geninfo_all_blocks=1 00:20:54.651 --rc geninfo_unexecuted_blocks=1 00:20:54.651 00:20:54.651 ' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.651 17:06:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:54.652 Cannot find device "nvmf_init_br" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:54.652 Cannot find device "nvmf_init_br2" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:54.652 Cannot find device "nvmf_tgt_br" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.652 Cannot find device "nvmf_tgt_br2" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:54.652 Cannot find device "nvmf_init_br" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:54.652 Cannot find device "nvmf_init_br2" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:54.652 Cannot find device "nvmf_tgt_br" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:54.652 Cannot find device "nvmf_tgt_br2" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:54.652 Cannot find device "nvmf_br" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:54.652 Cannot find device "nvmf_init_if" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:54.652 Cannot find device "nvmf_init_if2" 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
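Note: the nvmf_veth_init sequence that follows rebuilds the same virtual topology on every run: a target network namespace reached through veth pairs enslaved to a bridge, with 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace. Condensed into a standalone script (interface, namespace and address names copied from the log; the probing, teardown and error handling of the real helper are omitted), the setup is roughly:

#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends face the hosts, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# let NVMe/TCP traffic reach the listeners and cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity checks, mirroring the pings in the log
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2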
00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:54.652 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.911 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:54.912 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.912 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:20:54.912 00:20:54.912 --- 10.0.0.3 ping statistics --- 00:20:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.912 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:54.912 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:54.912 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:20:54.912 00:20:54.912 --- 10.0.0.4 ping statistics --- 00:20:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.912 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:54.912 00:20:54.912 --- 10.0.0.1 ping statistics --- 00:20:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.912 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:54.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:54.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:54.912 00:20:54.912 --- 10.0.0.2 ping statistics --- 00:20:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.912 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:54.912 17:06:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:55.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.848 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84455 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84455 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84455 ']' 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:55.848 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.849 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.849 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.849 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.849 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:55.849 [2024-11-20 17:06:14.369414] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
00:20:55.849 [2024-11-20 17:06:14.369506] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.107 [2024-11-20 17:06:14.526214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.107 [2024-11-20 17:06:14.591205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.107 [2024-11-20 17:06:14.591514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.107 [2024-11-20 17:06:14.591735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.107 [2024-11-20 17:06:14.591914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.107 [2024-11-20 17:06:14.591972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.107 [2024-11-20 17:06:14.593309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.107 [2024-11-20 17:06:14.593428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.107 [2024-11-20 17:06:14.593907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.107 [2024-11-20 17:06:14.593911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.366 [2024-11-20 17:06:14.653361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:56.366 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:56.367 17:06:14 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
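The class-code walk traced above is just filtering lspci output: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express). Pulled out of the helper, the enumeration is the single pipeline below (same commands as in scripts/common.sh); it prints one PCI address per line, here 0000:00:10.0 and 0000:00:11.0:

  # List NVMe controllers (PCI class 0108, prog-if 02) as bare BDF addresses, one per line.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'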
00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.367 17:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.367 ************************************ 00:20:56.367 START TEST spdk_target_abort 00:20:56.367 ************************************ 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.367 spdk_targetn1 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.367 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.367 [2024-11-20 17:06:14.891810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:56.626 [2024-11-20 17:06:14.933334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:56.626 17:06:14 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:56.626 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:56.627 17:06:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:59.913 Initializing NVMe Controllers 00:20:59.913 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:59.913 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:59.913 Initialization complete. Launching workers. 
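The workers launched here belong to the first of three abort sweeps: rabort assembles the -r target string from trtype/adrfam/traddr/trsvcid/subnqn and then runs the abort example once per queue depth. Condensed, the loop driving the output that follows is:

  # Same invocation as traced above, once per queue depth (values copied from this run).
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Each run's summary lines report how many I/Os completed, how many abort commands were submitted versus not submitted, and how many of the submitted aborts succeeded.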
00:20:59.913 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10431, failed: 0 00:20:59.913 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1036, failed to submit 9395 00:20:59.913 success 752, unsuccessful 284, failed 0 00:20:59.913 17:06:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:59.913 17:06:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:03.199 Initializing NVMe Controllers 00:21:03.199 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:03.199 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:03.199 Initialization complete. Launching workers. 00:21:03.199 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:21:03.199 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1140, failed to submit 7788 00:21:03.199 success 427, unsuccessful 713, failed 0 00:21:03.199 17:06:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.199 17:06:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:06.519 Initializing NVMe Controllers 00:21:06.519 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:06.519 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:06.519 Initialization complete. Launching workers. 
00:21:06.519 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31274, failed: 0 00:21:06.519 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2336, failed to submit 28938 00:21:06.519 success 455, unsuccessful 1881, failed 0 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.519 17:06:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84455 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84455 ']' 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84455 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84455 00:21:07.086 killing process with pid 84455 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84455' 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84455 00:21:07.086 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84455 00:21:07.345 ************************************ 00:21:07.345 END TEST spdk_target_abort 00:21:07.345 ************************************ 00:21:07.345 00:21:07.345 real 0m10.805s 00:21:07.345 user 0m41.543s 00:21:07.345 sys 0m2.128s 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.346 17:06:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:07.346 17:06:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.346 17:06:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.346 17:06:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:07.346 ************************************ 00:21:07.346 START TEST kernel_target_abort 00:21:07.346 
************************************ 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:07.346 17:06:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:07.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:07.605 Waiting for block devices as requested 00:21:07.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:07.864 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:07.864 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:08.123 No valid GPT data, bailing 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:08.123 No valid GPT data, bailing 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
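The scan in progress here walks every /sys/block/nvme* entry looking for a namespace that is not zoned and carries no partition table ("No valid GPT data, bailing" is the desirable outcome); the last such device becomes the backing block device for the kernel target. A compact sketch of that selection, using the same blkid probe as the helper:

  # Pick an unused NVMe namespace: skip zoned devices and anything with a partition table.
  nvme=""
  for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
  done
  echo "kernel target will use $nvme"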
00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:08.123 No valid GPT data, bailing 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:08.123 No valid GPT data, bailing 00:21:08.123 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:08.382 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 --hostid=52629f53-148b-4998-bf25-3a0243614ae8 -a 10.0.0.1 -t tcp -s 4420 00:21:08.382 00:21:08.382 Discovery Log Number of Records 2, Generation counter 2 00:21:08.382 =====Discovery Log Entry 0====== 00:21:08.382 trtype: tcp 00:21:08.382 adrfam: ipv4 00:21:08.382 subtype: current discovery subsystem 00:21:08.382 treq: not specified, sq flow control disable supported 00:21:08.382 portid: 1 00:21:08.382 trsvcid: 4420 00:21:08.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:08.383 traddr: 10.0.0.1 00:21:08.383 eflags: none 00:21:08.383 sectype: none 00:21:08.383 =====Discovery Log Entry 1====== 00:21:08.383 trtype: tcp 00:21:08.383 adrfam: ipv4 00:21:08.383 subtype: nvme subsystem 00:21:08.383 treq: not specified, sq flow control disable supported 00:21:08.383 portid: 1 00:21:08.383 trsvcid: 4420 00:21:08.383 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:08.383 traddr: 10.0.0.1 00:21:08.383 eflags: none 00:21:08.383 sectype: none 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:08.383 17:06:26 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:08.383 17:06:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:11.667 Initializing NVMe Controllers 00:21:11.667 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:11.667 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:11.667 Initialization complete. Launching workers. 00:21:11.667 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30441, failed: 0 00:21:11.667 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30441, failed to submit 0 00:21:11.667 success 0, unsuccessful 30441, failed 0 00:21:11.667 17:06:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:11.667 17:06:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:14.957 Initializing NVMe Controllers 00:21:14.957 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:14.957 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:14.957 Initialization complete. Launching workers. 
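For reference, the kernel target these abort sweeps are exercising was assembled a few steps back purely through nvmet configfs. The trace shows the mkdir/echo/ln sequence but not the files being written to; the sketch below fills in the standard nvmet attribute names (assumed, not visible in the trace) with the values used in this run:

  # Kernel NVMe-oF/TCP target: one subsystem, one namespace backed by /dev/nvme1n1, one TCP port.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet        # nvmet_tcp is needed as well; both are removed later with 'modprobe -r nvmet_tcp nvmet'
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # (the helper also echoes an SPDK-prefixed string into the subsystem; the exact attribute is not shown in the trace)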
00:21:14.957 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68476, failed: 0 00:21:14.957 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29896, failed to submit 38580 00:21:14.957 success 0, unsuccessful 29896, failed 0 00:21:14.957 17:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:14.957 17:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.244 Initializing NVMe Controllers 00:21:18.244 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:18.244 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:18.244 Initialization complete. Launching workers. 00:21:18.244 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82623, failed: 0 00:21:18.244 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20632, failed to submit 61991 00:21:18.244 success 0, unsuccessful 20632, failed 0 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:18.244 17:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:20.407 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:20.407 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:20.407 ************************************ 00:21:20.407 END TEST kernel_target_abort 00:21:20.407 ************************************ 00:21:20.407 00:21:20.407 real 0m13.223s 00:21:20.407 user 0m6.372s 00:21:20.407 sys 0m4.315s 00:21:20.407 17:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.407 17:06:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:20.666 
17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.666 17:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.666 rmmod nvme_tcp 00:21:20.666 rmmod nvme_fabrics 00:21:20.666 rmmod nvme_keyring 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84455 ']' 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84455 00:21:20.666 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84455 ']' 00:21:20.667 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84455 00:21:20.667 Process with pid 84455 is not found 00:21:20.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84455) - No such process 00:21:20.667 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84455 is not found' 00:21:20.667 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:20.667 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:20.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:20.926 Waiting for block devices as requested 00:21:21.185 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.185 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:21.185 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:21.445 17:06:39 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:21.445 00:21:21.445 real 0m27.087s 00:21:21.445 user 0m49.036s 00:21:21.445 sys 0m7.959s 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.445 17:06:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:21.445 ************************************ 00:21:21.445 END TEST nvmf_abort_qd_sizes 00:21:21.445 ************************************ 00:21:21.445 17:06:39 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:21.445 17:06:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.445 17:06:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.445 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:21:21.445 ************************************ 00:21:21.445 START TEST keyring_file 00:21:21.445 ************************************ 00:21:21.445 17:06:39 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:21.705 * Looking for test storage... 
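The nvmf_abort_qd_sizes suite ends just above with nvmftestfini: unload the NVMe/TCP initiator modules, drop the test iptables rules, and tear down the veth/bridge/netns topology built during setup. Condensed from the trace (the final remove_spdk_ns helper's own commands are not shown; the last line is an assumption about what it amounts to):

  # Initiator-side module cleanup.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop the SPDK_NVMF iptables rules added for the test.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Undo the test networking: detach and delete the veth pairs and bridge, then drop the netns.
  for veth in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$veth" nomaster
    ip link set "$veth" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # what remove_spdk_ns amounts to (assumed)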
00:21:21.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.705 --rc genhtml_branch_coverage=1 00:21:21.705 --rc genhtml_function_coverage=1 00:21:21.705 --rc genhtml_legend=1 00:21:21.705 --rc geninfo_all_blocks=1 00:21:21.705 --rc geninfo_unexecuted_blocks=1 00:21:21.705 00:21:21.705 ' 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.705 --rc genhtml_branch_coverage=1 00:21:21.705 --rc genhtml_function_coverage=1 00:21:21.705 --rc genhtml_legend=1 00:21:21.705 --rc geninfo_all_blocks=1 00:21:21.705 --rc 
geninfo_unexecuted_blocks=1 00:21:21.705 00:21:21.705 ' 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.705 --rc genhtml_branch_coverage=1 00:21:21.705 --rc genhtml_function_coverage=1 00:21:21.705 --rc genhtml_legend=1 00:21:21.705 --rc geninfo_all_blocks=1 00:21:21.705 --rc geninfo_unexecuted_blocks=1 00:21:21.705 00:21:21.705 ' 00:21:21.705 17:06:40 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:21.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.705 --rc genhtml_branch_coverage=1 00:21:21.705 --rc genhtml_function_coverage=1 00:21:21.705 --rc genhtml_legend=1 00:21:21.705 --rc geninfo_all_blocks=1 00:21:21.705 --rc geninfo_unexecuted_blocks=1 00:21:21.705 00:21:21.705 ' 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.705 17:06:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.705 17:06:40 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.705 17:06:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.705 17:06:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.705 17:06:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:21.705 17:06:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:21.705 17:06:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:21.705 17:06:40 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cGYeNmSyIH 00:21:21.705 17:06:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:21.705 17:06:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:21.706 17:06:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:21.706 17:06:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:21.706 17:06:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cGYeNmSyIH 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cGYeNmSyIH 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cGYeNmSyIH 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pfIlgRMNOh 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:21.965 17:06:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pfIlgRMNOh 00:21:21.965 17:06:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pfIlgRMNOh 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pfIlgRMNOh 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=85365 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.965 17:06:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85365 00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85365 ']' 00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
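The two temp files created just above (/tmp/tmp.cGYeNmSyIH and /tmp/tmp.pfIlgRMNOh) hold the test's TLS PSKs, wrapped into the NVMe/TCP interchange format by the python one-liner inside format_interchange_psk (the encoding itself lives in nvmf/common.sh and is not reproduced here). What prep_key does with each key, and how the key file is later handed to the bdevperf keyring, is roughly:

  # prep_key: write the interchange-format PSK to a private temp file.
  # format_interchange_psk is the helper sourced from nvmf/common.sh.
  key0path=$(mktemp)                                              # e.g. /tmp/tmp.cGYeNmSyIH
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"

  # Later in the test the file is registered with the bdevperf keyring under the name "key0".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"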
00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.965 17:06:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:21.965 [2024-11-20 17:06:40.391605] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:21:21.965 [2024-11-20 17:06:40.391718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85365 ] 00:21:22.224 [2024-11-20 17:06:40.542436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.224 [2024-11-20 17:06:40.607222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.224 [2024-11-20 17:06:40.680549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:22.484 17:06:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:22.484 [2024-11-20 17:06:40.894281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.484 null0 00:21:22.484 [2024-11-20 17:06:40.926227] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.484 [2024-11-20 17:06:40.926447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.484 17:06:40 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:22.484 [2024-11-20 17:06:40.954252] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:22.484 request: 00:21:22.484 { 00:21:22.484 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.484 "secure_channel": false, 00:21:22.484 "listen_address": { 00:21:22.484 "trtype": "tcp", 00:21:22.484 "traddr": "127.0.0.1", 00:21:22.484 "trsvcid": "4420" 00:21:22.484 }, 00:21:22.484 "method": "nvmf_subsystem_add_listener", 
00:21:22.484 "req_id": 1 00:21:22.484 } 00:21:22.484 Got JSON-RPC error response 00:21:22.484 response: 00:21:22.484 { 00:21:22.484 "code": -32602, 00:21:22.484 "message": "Invalid parameters" 00:21:22.484 } 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.484 17:06:40 keyring_file -- keyring/file.sh@47 -- # bperfpid=85375 00:21:22.484 17:06:40 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:22.484 17:06:40 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85375 /var/tmp/bperf.sock 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85375 ']' 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.484 17:06:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:22.743 [2024-11-20 17:06:41.024049] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
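The process initializing here is a second SPDK application: bdevperf, launched in wait-for-RPC mode (-z) with its own RPC socket, so every bperf_cmd in the rest of the run is an rpc.py call against that socket rather than against the target's /var/tmp/spdk.sock. Restating the launch from the trace (backgrounding and pid capture are how the harness presumably records bperfpid=85375):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z &
    # bperf_cmd is then just rpc.py pointed at that socket, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys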
00:21:22.743 [2024-11-20 17:06:41.024160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85375 ] 00:21:22.743 [2024-11-20 17:06:41.178802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.743 [2024-11-20 17:06:41.247168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.002 [2024-11-20 17:06:41.304214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.002 17:06:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.002 17:06:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:23.002 17:06:41 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:23.002 17:06:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:23.323 17:06:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pfIlgRMNOh 00:21:23.323 17:06:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pfIlgRMNOh 00:21:23.594 17:06:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:23.594 17:06:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:23.594 17:06:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:23.594 17:06:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:23.594 17:06:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:23.853 17:06:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cGYeNmSyIH == \/\t\m\p\/\t\m\p\.\c\G\Y\e\N\m\S\y\I\H ]] 00:21:23.853 17:06:42 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:23.853 17:06:42 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:23.853 17:06:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:23.853 17:06:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:23.853 17:06:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.112 17:06:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.pfIlgRMNOh == \/\t\m\p\/\t\m\p\.\p\f\I\l\g\R\M\N\O\h ]] 00:21:24.112 17:06:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:24.112 17:06:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:24.112 17:06:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.112 17:06:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.112 17:06:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.112 17:06:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.370 17:06:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:24.370 17:06:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:24.370 17:06:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:24.370 17:06:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.370 17:06:42 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.370 17:06:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:24.370 17:06:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.629 17:06:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:24.629 17:06:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:24.629 17:06:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:24.887 [2024-11-20 17:06:43.254793] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.887 nvme0n1 00:21:24.887 17:06:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:24.887 17:06:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:24.887 17:06:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.887 17:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.888 17:06:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.888 17:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:25.146 17:06:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:25.146 17:06:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:25.146 17:06:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:25.146 17:06:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:25.146 17:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:25.146 17:06:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.146 17:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:25.713 17:06:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:25.713 17:06:43 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:25.713 Running I/O for 1 seconds... 
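While the one-second run above completes, the sequence that produced it is worth restating: the keys are registered with keyring_file_add_key, refcounts are read back through keyring_get_keys plus a jq filter (get_refcnt in the trace is exactly that pipeline), the controller is attached over TCP with --psk key0, which is what bumps key0's refcount from 1 to 2, and the workload is driven through bdevperf.py. A condensed sketch using the same commands as the trace (the combined jq filter is an equivalent shorthand for the two-step filter used above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests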
00:21:26.650 11037.00 IOPS, 43.11 MiB/s 00:21:26.650 Latency(us) 00:21:26.650 [2024-11-20T17:06:45.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.650 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:26.650 nvme0n1 : 1.01 11078.53 43.28 0.00 0.00 11512.73 5630.14 23473.80 00:21:26.650 [2024-11-20T17:06:45.186Z] =================================================================================================================== 00:21:26.650 [2024-11-20T17:06:45.186Z] Total : 11078.53 43.28 0.00 0.00 11512.73 5630.14 23473.80 00:21:26.650 { 00:21:26.650 "results": [ 00:21:26.650 { 00:21:26.650 "job": "nvme0n1", 00:21:26.650 "core_mask": "0x2", 00:21:26.650 "workload": "randrw", 00:21:26.650 "percentage": 50, 00:21:26.650 "status": "finished", 00:21:26.650 "queue_depth": 128, 00:21:26.650 "io_size": 4096, 00:21:26.650 "runtime": 1.007895, 00:21:26.650 "iops": 11078.534966439956, 00:21:26.650 "mibps": 43.27552721265608, 00:21:26.650 "io_failed": 0, 00:21:26.650 "io_timeout": 0, 00:21:26.650 "avg_latency_us": 11512.732639669126, 00:21:26.650 "min_latency_us": 5630.138181818182, 00:21:26.650 "max_latency_us": 23473.803636363635 00:21:26.650 } 00:21:26.650 ], 00:21:26.650 "core_count": 1 00:21:26.650 } 00:21:26.650 17:06:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:26.650 17:06:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:26.915 17:06:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:26.915 17:06:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:26.915 17:06:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:26.915 17:06:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:26.915 17:06:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:26.915 17:06:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:27.481 17:06:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:27.482 17:06:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:27.482 17:06:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:27.482 17:06:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:27.482 17:06:45 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.482 17:06:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:27.482 17:06:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:28.049 [2024-11-20 17:06:46.311976] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:28.049 [2024-11-20 17:06:46.312565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x990c60 (107): Transport endpoint is not connected 00:21:28.049 [2024-11-20 17:06:46.313554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x990c60 (9): Bad file descriptor 00:21:28.049 [2024-11-20 17:06:46.314553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:28.049 [2024-11-20 17:06:46.314577] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:28.049 [2024-11-20 17:06:46.314590] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:28.049 [2024-11-20 17:06:46.314602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
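The errors above, together with the JSON-RPC request/response dump that follows, are the expected outcome of a negative test: after detaching nvme0, the attach is retried with --psk key1, and since the target side was presumably provisioned with key0's PSK the TLS handshake cannot complete, so the RPC fails with an I/O error. The failing call, restated with an explicit check of the expected outcome (the && / || tail is illustrative, not part of the harness):

    # expected to fail: wrong PSK for this subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 \
        && echo "unexpected success" || echo "failed as expected"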
00:21:28.049 request: 00:21:28.049 { 00:21:28.049 "name": "nvme0", 00:21:28.049 "trtype": "tcp", 00:21:28.049 "traddr": "127.0.0.1", 00:21:28.049 "adrfam": "ipv4", 00:21:28.049 "trsvcid": "4420", 00:21:28.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:28.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:28.049 "prchk_reftag": false, 00:21:28.049 "prchk_guard": false, 00:21:28.049 "hdgst": false, 00:21:28.049 "ddgst": false, 00:21:28.049 "psk": "key1", 00:21:28.049 "allow_unrecognized_csi": false, 00:21:28.049 "method": "bdev_nvme_attach_controller", 00:21:28.049 "req_id": 1 00:21:28.049 } 00:21:28.049 Got JSON-RPC error response 00:21:28.049 response: 00:21:28.049 { 00:21:28.049 "code": -5, 00:21:28.049 "message": "Input/output error" 00:21:28.049 } 00:21:28.049 17:06:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:28.049 17:06:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.049 17:06:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.049 17:06:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.049 17:06:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:28.049 17:06:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:28.049 17:06:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.049 17:06:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.049 17:06:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.049 17:06:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.308 17:06:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:28.308 17:06:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:28.308 17:06:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:28.308 17:06:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.308 17:06:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:28.308 17:06:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.308 17:06:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.567 17:06:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:28.567 17:06:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:28.567 17:06:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:28.826 17:06:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:28.826 17:06:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:29.394 17:06:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:29.394 17:06:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:29.394 17:06:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.652 17:06:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:29.652 17:06:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.cGYeNmSyIH 00:21:29.652 17:06:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:29.652 17:06:47 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.652 17:06:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:29.652 17:06:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:29.911 [2024-11-20 17:06:48.210094] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cGYeNmSyIH': 0100660 00:21:29.911 [2024-11-20 17:06:48.210142] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:29.911 request: 00:21:29.911 { 00:21:29.911 "name": "key0", 00:21:29.911 "path": "/tmp/tmp.cGYeNmSyIH", 00:21:29.911 "method": "keyring_file_add_key", 00:21:29.911 "req_id": 1 00:21:29.911 } 00:21:29.911 Got JSON-RPC error response 00:21:29.911 response: 00:21:29.911 { 00:21:29.911 "code": -1, 00:21:29.911 "message": "Operation not permitted" 00:21:29.911 } 00:21:29.911 17:06:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:29.911 17:06:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.911 17:06:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.911 17:06:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.911 17:06:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.cGYeNmSyIH 00:21:29.911 17:06:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:29.911 17:06:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cGYeNmSyIH 00:21:30.171 17:06:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.cGYeNmSyIH 00:21:30.171 17:06:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:30.171 17:06:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:30.171 17:06:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:30.171 17:06:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:30.171 17:06:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:30.171 17:06:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.430 17:06:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:30.430 17:06:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:30.430 17:06:48 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.430 17:06:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:30.430 17:06:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:30.690 [2024-11-20 17:06:49.134440] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cGYeNmSyIH': No such file or directory 00:21:30.690 [2024-11-20 17:06:49.134485] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:30.690 [2024-11-20 17:06:49.134507] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:30.690 [2024-11-20 17:06:49.134517] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:30.690 [2024-11-20 17:06:49.134536] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:30.690 [2024-11-20 17:06:49.134545] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:30.690 request: 00:21:30.690 { 00:21:30.690 "name": "nvme0", 00:21:30.690 "trtype": "tcp", 00:21:30.690 "traddr": "127.0.0.1", 00:21:30.690 "adrfam": "ipv4", 00:21:30.690 "trsvcid": "4420", 00:21:30.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:30.690 "prchk_reftag": false, 00:21:30.690 "prchk_guard": false, 00:21:30.690 "hdgst": false, 00:21:30.690 "ddgst": false, 00:21:30.690 "psk": "key0", 00:21:30.690 "allow_unrecognized_csi": false, 00:21:30.690 "method": "bdev_nvme_attach_controller", 00:21:30.690 "req_id": 1 00:21:30.690 } 00:21:30.690 Got JSON-RPC error response 00:21:30.690 response: 00:21:30.690 { 00:21:30.690 "code": -19, 00:21:30.690 "message": "No such device" 00:21:30.690 } 00:21:30.690 17:06:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:30.690 17:06:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.690 17:06:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.690 17:06:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.690 17:06:49 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:30.690 17:06:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:30.949 17:06:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:30.949 
17:06:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hcIdjNdf7w 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:30.949 17:06:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hcIdjNdf7w 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hcIdjNdf7w 00:21:30.949 17:06:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.hcIdjNdf7w 00:21:30.949 17:06:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hcIdjNdf7w 00:21:30.949 17:06:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hcIdjNdf7w 00:21:31.584 17:06:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.584 17:06:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.584 nvme0n1 00:21:31.842 17:06:50 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:31.842 17:06:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:31.842 17:06:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.842 17:06:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.842 17:06:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:31.842 17:06:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.101 17:06:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:32.101 17:06:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:32.101 17:06:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:32.359 17:06:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:32.359 17:06:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:32.359 17:06:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.359 17:06:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.359 17:06:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.616 17:06:51 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:32.616 17:06:51 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:32.616 17:06:51 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:32.616 17:06:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.616 17:06:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.616 17:06:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.616 17:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.875 17:06:51 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:32.875 17:06:51 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:32.875 17:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:33.133 17:06:51 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:33.133 17:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.133 17:06:51 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:33.699 17:06:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:33.699 17:06:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hcIdjNdf7w 00:21:33.699 17:06:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hcIdjNdf7w 00:21:33.699 17:06:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pfIlgRMNOh 00:21:33.699 17:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pfIlgRMNOh 00:21:33.957 17:06:52 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:33.957 17:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:34.525 nvme0n1 00:21:34.525 17:06:52 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:34.525 17:06:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:34.784 17:06:53 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:34.784 "subsystems": [ 00:21:34.784 { 00:21:34.784 "subsystem": "keyring", 00:21:34.784 "config": [ 00:21:34.784 { 00:21:34.785 "method": "keyring_file_add_key", 00:21:34.785 "params": { 00:21:34.785 "name": "key0", 00:21:34.785 "path": "/tmp/tmp.hcIdjNdf7w" 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "keyring_file_add_key", 00:21:34.785 "params": { 00:21:34.785 "name": "key1", 00:21:34.785 "path": "/tmp/tmp.pfIlgRMNOh" 00:21:34.785 } 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": "iobuf", 00:21:34.785 "config": [ 00:21:34.785 { 00:21:34.785 "method": "iobuf_set_options", 00:21:34.785 "params": { 00:21:34.785 "small_pool_count": 8192, 00:21:34.785 "large_pool_count": 1024, 00:21:34.785 "small_bufsize": 8192, 00:21:34.785 "large_bufsize": 135168, 00:21:34.785 "enable_numa": false 00:21:34.785 } 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": 
"sock", 00:21:34.785 "config": [ 00:21:34.785 { 00:21:34.785 "method": "sock_set_default_impl", 00:21:34.785 "params": { 00:21:34.785 "impl_name": "uring" 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "sock_impl_set_options", 00:21:34.785 "params": { 00:21:34.785 "impl_name": "ssl", 00:21:34.785 "recv_buf_size": 4096, 00:21:34.785 "send_buf_size": 4096, 00:21:34.785 "enable_recv_pipe": true, 00:21:34.785 "enable_quickack": false, 00:21:34.785 "enable_placement_id": 0, 00:21:34.785 "enable_zerocopy_send_server": true, 00:21:34.785 "enable_zerocopy_send_client": false, 00:21:34.785 "zerocopy_threshold": 0, 00:21:34.785 "tls_version": 0, 00:21:34.785 "enable_ktls": false 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "sock_impl_set_options", 00:21:34.785 "params": { 00:21:34.785 "impl_name": "posix", 00:21:34.785 "recv_buf_size": 2097152, 00:21:34.785 "send_buf_size": 2097152, 00:21:34.785 "enable_recv_pipe": true, 00:21:34.785 "enable_quickack": false, 00:21:34.785 "enable_placement_id": 0, 00:21:34.785 "enable_zerocopy_send_server": true, 00:21:34.785 "enable_zerocopy_send_client": false, 00:21:34.785 "zerocopy_threshold": 0, 00:21:34.785 "tls_version": 0, 00:21:34.785 "enable_ktls": false 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "sock_impl_set_options", 00:21:34.785 "params": { 00:21:34.785 "impl_name": "uring", 00:21:34.785 "recv_buf_size": 2097152, 00:21:34.785 "send_buf_size": 2097152, 00:21:34.785 "enable_recv_pipe": true, 00:21:34.785 "enable_quickack": false, 00:21:34.785 "enable_placement_id": 0, 00:21:34.785 "enable_zerocopy_send_server": false, 00:21:34.785 "enable_zerocopy_send_client": false, 00:21:34.785 "zerocopy_threshold": 0, 00:21:34.785 "tls_version": 0, 00:21:34.785 "enable_ktls": false 00:21:34.785 } 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": "vmd", 00:21:34.785 "config": [] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": "accel", 00:21:34.785 "config": [ 00:21:34.785 { 00:21:34.785 "method": "accel_set_options", 00:21:34.785 "params": { 00:21:34.785 "small_cache_size": 128, 00:21:34.785 "large_cache_size": 16, 00:21:34.785 "task_count": 2048, 00:21:34.785 "sequence_count": 2048, 00:21:34.785 "buf_count": 2048 00:21:34.785 } 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": "bdev", 00:21:34.785 "config": [ 00:21:34.785 { 00:21:34.785 "method": "bdev_set_options", 00:21:34.785 "params": { 00:21:34.785 "bdev_io_pool_size": 65535, 00:21:34.785 "bdev_io_cache_size": 256, 00:21:34.785 "bdev_auto_examine": true, 00:21:34.785 "iobuf_small_cache_size": 128, 00:21:34.785 "iobuf_large_cache_size": 16 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_raid_set_options", 00:21:34.785 "params": { 00:21:34.785 "process_window_size_kb": 1024, 00:21:34.785 "process_max_bandwidth_mb_sec": 0 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_iscsi_set_options", 00:21:34.785 "params": { 00:21:34.785 "timeout_sec": 30 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_nvme_set_options", 00:21:34.785 "params": { 00:21:34.785 "action_on_timeout": "none", 00:21:34.785 "timeout_us": 0, 00:21:34.785 "timeout_admin_us": 0, 00:21:34.785 "keep_alive_timeout_ms": 10000, 00:21:34.785 "arbitration_burst": 0, 00:21:34.785 "low_priority_weight": 0, 00:21:34.785 "medium_priority_weight": 0, 00:21:34.785 "high_priority_weight": 0, 00:21:34.785 "nvme_adminq_poll_period_us": 
10000, 00:21:34.785 "nvme_ioq_poll_period_us": 0, 00:21:34.785 "io_queue_requests": 512, 00:21:34.785 "delay_cmd_submit": true, 00:21:34.785 "transport_retry_count": 4, 00:21:34.785 "bdev_retry_count": 3, 00:21:34.785 "transport_ack_timeout": 0, 00:21:34.785 "ctrlr_loss_timeout_sec": 0, 00:21:34.785 "reconnect_delay_sec": 0, 00:21:34.785 "fast_io_fail_timeout_sec": 0, 00:21:34.785 "disable_auto_failback": false, 00:21:34.785 "generate_uuids": false, 00:21:34.785 "transport_tos": 0, 00:21:34.785 "nvme_error_stat": false, 00:21:34.785 "rdma_srq_size": 0, 00:21:34.785 "io_path_stat": false, 00:21:34.785 "allow_accel_sequence": false, 00:21:34.785 "rdma_max_cq_size": 0, 00:21:34.785 "rdma_cm_event_timeout_ms": 0, 00:21:34.785 "dhchap_digests": [ 00:21:34.785 "sha256", 00:21:34.785 "sha384", 00:21:34.785 "sha512" 00:21:34.785 ], 00:21:34.785 "dhchap_dhgroups": [ 00:21:34.785 "null", 00:21:34.785 "ffdhe2048", 00:21:34.785 "ffdhe3072", 00:21:34.785 "ffdhe4096", 00:21:34.785 "ffdhe6144", 00:21:34.785 "ffdhe8192" 00:21:34.785 ] 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_nvme_attach_controller", 00:21:34.785 "params": { 00:21:34.785 "name": "nvme0", 00:21:34.785 "trtype": "TCP", 00:21:34.785 "adrfam": "IPv4", 00:21:34.785 "traddr": "127.0.0.1", 00:21:34.785 "trsvcid": "4420", 00:21:34.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.785 "prchk_reftag": false, 00:21:34.785 "prchk_guard": false, 00:21:34.785 "ctrlr_loss_timeout_sec": 0, 00:21:34.785 "reconnect_delay_sec": 0, 00:21:34.785 "fast_io_fail_timeout_sec": 0, 00:21:34.785 "psk": "key0", 00:21:34.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:34.785 "hdgst": false, 00:21:34.785 "ddgst": false, 00:21:34.785 "multipath": "multipath" 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_nvme_set_hotplug", 00:21:34.785 "params": { 00:21:34.785 "period_us": 100000, 00:21:34.785 "enable": false 00:21:34.785 } 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "method": "bdev_wait_for_examine" 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }, 00:21:34.785 { 00:21:34.785 "subsystem": "nbd", 00:21:34.785 "config": [] 00:21:34.785 } 00:21:34.785 ] 00:21:34.785 }' 00:21:34.785 17:06:53 keyring_file -- keyring/file.sh@115 -- # killprocess 85375 00:21:34.785 17:06:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85375 ']' 00:21:34.785 17:06:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85375 00:21:34.785 17:06:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:34.785 17:06:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.785 17:06:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85375 00:21:34.785 killing process with pid 85375 00:21:34.785 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.785 00:21:34.785 Latency(us) 00:21:34.785 [2024-11-20T17:06:53.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.786 [2024-11-20T17:06:53.322Z] =================================================================================================================== 00:21:34.786 [2024-11-20T17:06:53.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.786 17:06:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.786 17:06:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.786 17:06:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85375' 00:21:34.786 
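The shutdown that follows is the start of the config-replay half of the test: the first bdevperf (pid 85375) is killed and a second one (pid 85629 below) is started with the JSON captured by save_config fed in on a file descriptor, so the keyring, its keys and the attached controller are reconstructed from configuration alone instead of live RPCs. Roughly, assuming the /dev/fd/63 seen in the relaunch comes from bash process substitution:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    config=$($rpc -s /var/tmp/bperf.sock save_config)     # snapshot keyring/sock/bdev subsystems
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &  # replay the snapshot at startup

The subsequent keyring_get_keys length == 2 and refcnt checks below confirm that both keys and the PSK-protected controller came back purely from that config.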
17:06:53 keyring_file -- common/autotest_common.sh@973 -- # kill 85375 00:21:34.786 17:06:53 keyring_file -- common/autotest_common.sh@978 -- # wait 85375 00:21:35.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.045 17:06:53 keyring_file -- keyring/file.sh@118 -- # bperfpid=85629 00:21:35.045 17:06:53 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:35.045 17:06:53 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85629 /var/tmp/bperf.sock 00:21:35.045 17:06:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85629 ']' 00:21:35.045 17:06:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.045 17:06:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.045 17:06:53 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:35.045 "subsystems": [ 00:21:35.045 { 00:21:35.045 "subsystem": "keyring", 00:21:35.045 "config": [ 00:21:35.045 { 00:21:35.045 "method": "keyring_file_add_key", 00:21:35.045 "params": { 00:21:35.045 "name": "key0", 00:21:35.045 "path": "/tmp/tmp.hcIdjNdf7w" 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "keyring_file_add_key", 00:21:35.045 "params": { 00:21:35.045 "name": "key1", 00:21:35.045 "path": "/tmp/tmp.pfIlgRMNOh" 00:21:35.045 } 00:21:35.045 } 00:21:35.045 ] 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "subsystem": "iobuf", 00:21:35.045 "config": [ 00:21:35.045 { 00:21:35.045 "method": "iobuf_set_options", 00:21:35.045 "params": { 00:21:35.045 "small_pool_count": 8192, 00:21:35.045 "large_pool_count": 1024, 00:21:35.045 "small_bufsize": 8192, 00:21:35.045 "large_bufsize": 135168, 00:21:35.045 "enable_numa": false 00:21:35.045 } 00:21:35.045 } 00:21:35.045 ] 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "subsystem": "sock", 00:21:35.045 "config": [ 00:21:35.045 { 00:21:35.045 "method": "sock_set_default_impl", 00:21:35.045 "params": { 00:21:35.045 "impl_name": "uring" 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "sock_impl_set_options", 00:21:35.045 "params": { 00:21:35.045 "impl_name": "ssl", 00:21:35.045 "recv_buf_size": 4096, 00:21:35.045 "send_buf_size": 4096, 00:21:35.045 "enable_recv_pipe": true, 00:21:35.045 "enable_quickack": false, 00:21:35.045 "enable_placement_id": 0, 00:21:35.045 "enable_zerocopy_send_server": true, 00:21:35.045 "enable_zerocopy_send_client": false, 00:21:35.045 "zerocopy_threshold": 0, 00:21:35.045 "tls_version": 0, 00:21:35.045 "enable_ktls": false 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "sock_impl_set_options", 00:21:35.045 "params": { 00:21:35.045 "impl_name": "posix", 00:21:35.045 "recv_buf_size": 2097152, 00:21:35.045 "send_buf_size": 2097152, 00:21:35.045 "enable_recv_pipe": true, 00:21:35.045 "enable_quickack": false, 00:21:35.045 "enable_placement_id": 0, 00:21:35.045 "enable_zerocopy_send_server": true, 00:21:35.045 "enable_zerocopy_send_client": false, 00:21:35.045 "zerocopy_threshold": 0, 00:21:35.045 "tls_version": 0, 00:21:35.045 "enable_ktls": false 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "sock_impl_set_options", 00:21:35.045 "params": { 00:21:35.045 "impl_name": "uring", 00:21:35.045 "recv_buf_size": 2097152, 00:21:35.045 "send_buf_size": 2097152, 00:21:35.045 "enable_recv_pipe": true, 00:21:35.045 "enable_quickack": false, 00:21:35.045 
"enable_placement_id": 0, 00:21:35.045 "enable_zerocopy_send_server": false, 00:21:35.045 "enable_zerocopy_send_client": false, 00:21:35.045 "zerocopy_threshold": 0, 00:21:35.045 "tls_version": 0, 00:21:35.045 "enable_ktls": false 00:21:35.045 } 00:21:35.045 } 00:21:35.045 ] 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "subsystem": "vmd", 00:21:35.045 "config": [] 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "subsystem": "accel", 00:21:35.045 "config": [ 00:21:35.045 { 00:21:35.045 "method": "accel_set_options", 00:21:35.045 "params": { 00:21:35.045 "small_cache_size": 128, 00:21:35.045 "large_cache_size": 16, 00:21:35.045 "task_count": 2048, 00:21:35.045 "sequence_count": 2048, 00:21:35.045 "buf_count": 2048 00:21:35.045 } 00:21:35.045 } 00:21:35.045 ] 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "subsystem": "bdev", 00:21:35.045 "config": [ 00:21:35.045 { 00:21:35.045 "method": "bdev_set_options", 00:21:35.045 "params": { 00:21:35.045 "bdev_io_pool_size": 65535, 00:21:35.045 "bdev_io_cache_size": 256, 00:21:35.045 "bdev_auto_examine": true, 00:21:35.045 "iobuf_small_cache_size": 128, 00:21:35.045 "iobuf_large_cache_size": 16 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "bdev_raid_set_options", 00:21:35.045 "params": { 00:21:35.045 "process_window_size_kb": 1024, 00:21:35.045 "process_max_bandwidth_mb_sec": 0 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "bdev_iscsi_set_options", 00:21:35.045 "params": { 00:21:35.045 "timeout_sec": 30 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "bdev_nvme_set_options", 00:21:35.045 "params": { 00:21:35.045 "action_on_timeout": "none", 00:21:35.045 "timeout_us": 0, 00:21:35.045 "timeout_admin_us": 0, 00:21:35.045 "keep_alive_timeout_ms": 10000, 00:21:35.045 "arbitration_burst": 0, 00:21:35.045 "low_priority_weight": 0, 00:21:35.045 "medium_priority_weight": 0, 00:21:35.045 "high_priority_weight": 0, 00:21:35.045 "nvme_adminq_poll_period_us": 10000, 00:21:35.045 "nvme_ioq_poll_period_us": 0, 00:21:35.045 "io_queue_requests": 512, 00:21:35.045 "delay_cmd_submit": true, 00:21:35.045 "transport_retry_count": 4, 00:21:35.045 "bdev_retry_count": 3, 00:21:35.045 "transport_ack_timeout": 0, 00:21:35.045 "ctrlr_loss_timeout_sec": 0, 00:21:35.045 "reconnect_delay_sec": 0, 00:21:35.045 "fast_io_fail_timeout_sec": 0, 00:21:35.045 "disable_auto_failback": false, 00:21:35.045 "generate_uuids": false, 00:21:35.045 "transport_tos": 0, 00:21:35.045 "nvme_error_stat": false, 00:21:35.045 "rdma_srq_size": 0, 00:21:35.045 "io_path_stat": false, 00:21:35.045 "allow_accel_sequence": false, 00:21:35.045 "rdma_max_cq_size": 0, 00:21:35.045 "rdma_cm_event_timeout_ms": 0, 00:21:35.045 "dhchap_digests": [ 00:21:35.045 "sha256", 00:21:35.045 "sha384", 00:21:35.045 "sha512" 00:21:35.045 ], 00:21:35.045 "dhchap_dhgroups": [ 00:21:35.045 "null", 00:21:35.045 "ffdhe2048", 00:21:35.045 "ffdhe3072", 00:21:35.045 "ffdhe4096", 00:21:35.045 "ffdhe6144", 00:21:35.045 "ffdhe8192" 00:21:35.045 ] 00:21:35.045 } 00:21:35.045 }, 00:21:35.045 { 00:21:35.045 "method": "bdev_nvme_attach_controller", 00:21:35.045 "params": { 00:21:35.045 "name": "nvme0", 00:21:35.045 "trtype": "TCP", 00:21:35.045 "adrfam": "IPv4", 00:21:35.045 "traddr": "127.0.0.1", 00:21:35.045 "trsvcid": "4420", 00:21:35.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.045 "prchk_reftag": false, 00:21:35.045 "prchk_guard": false, 00:21:35.045 "ctrlr_loss_timeout_sec": 0, 00:21:35.046 "reconnect_delay_sec": 0, 00:21:35.046 "fast_io_fail_timeout_sec": 0, 
00:21:35.046 "psk": "key0", 00:21:35.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.046 "hdgst": false, 00:21:35.046 "ddgst": false, 00:21:35.046 "multipath": "multipath" 00:21:35.046 } 00:21:35.046 }, 00:21:35.046 { 00:21:35.046 "method": "bdev_nvme_set_hotplug", 00:21:35.046 "params": { 00:21:35.046 "period_us": 100000, 00:21:35.046 "enable": false 00:21:35.046 } 00:21:35.046 }, 00:21:35.046 { 00:21:35.046 "method": "bdev_wait_for_examine" 00:21:35.046 } 00:21:35.046 ] 00:21:35.046 }, 00:21:35.046 { 00:21:35.046 "subsystem": "nbd", 00:21:35.046 "config": [] 00:21:35.046 } 00:21:35.046 ] 00:21:35.046 }' 00:21:35.046 17:06:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.046 17:06:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.046 17:06:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:35.046 [2024-11-20 17:06:53.447713] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 00:21:35.046 [2024-11-20 17:06:53.447816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85629 ] 00:21:35.303 [2024-11-20 17:06:53.592948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.303 [2024-11-20 17:06:53.649182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.303 [2024-11-20 17:06:53.789058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.560 [2024-11-20 17:06:53.850339] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.127 17:06:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.127 17:06:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:36.127 17:06:54 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:36.127 17:06:54 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:36.127 17:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.386 17:06:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:36.386 17:06:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:36.386 17:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:36.386 17:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.386 17:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.386 17:06:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.386 17:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:36.644 17:06:55 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:36.644 17:06:55 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:36.644 17:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:36.644 17:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.644 17:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.644 17:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:36.644 17:06:55 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.210 17:06:55 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:37.210 17:06:55 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:37.210 17:06:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:37.210 17:06:55 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:37.210 17:06:55 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:37.211 17:06:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:37.211 17:06:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hcIdjNdf7w /tmp/tmp.pfIlgRMNOh 00:21:37.211 17:06:55 keyring_file -- keyring/file.sh@20 -- # killprocess 85629 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85629 ']' 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85629 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85629 00:21:37.211 killing process with pid 85629 00:21:37.211 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.211 00:21:37.211 Latency(us) 00:21:37.211 [2024-11-20T17:06:55.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.211 [2024-11-20T17:06:55.747Z] =================================================================================================================== 00:21:37.211 [2024-11-20T17:06:55.747Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85629' 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@973 -- # kill 85629 00:21:37.211 17:06:55 keyring_file -- common/autotest_common.sh@978 -- # wait 85629 00:21:37.468 17:06:55 keyring_file -- keyring/file.sh@21 -- # killprocess 85365 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85365 ']' 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85365 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85365 00:21:37.468 killing process with pid 85365 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85365' 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@973 -- # kill 85365 00:21:37.468 17:06:55 keyring_file -- common/autotest_common.sh@978 -- # wait 85365 00:21:38.035 00:21:38.035 real 0m16.432s 00:21:38.035 user 0m41.966s 00:21:38.035 sys 0m3.128s 00:21:38.035 17:06:56 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.035 
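For completeness, the cleanup that closed keyring_file above amounts to deleting the two temporary PSK files and stopping both SPDK applications; killprocess in the harness resolves the process name and waits for exit, but its effect is essentially:

    rm -f /tmp/tmp.hcIdjNdf7w /tmp/tmp.pfIlgRMNOh
    kill 85629    # second bdevperf instance
    kill 85365    # spdk_tgt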
************************************ 00:21:38.035 17:06:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:38.035 END TEST keyring_file 00:21:38.035 ************************************ 00:21:38.035 17:06:56 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:38.035 17:06:56 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:38.035 17:06:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.035 17:06:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.035 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:21:38.035 ************************************ 00:21:38.035 START TEST keyring_linux 00:21:38.035 ************************************ 00:21:38.035 17:06:56 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:38.035 Joined session keyring: 896703319 00:21:38.035 * Looking for test storage... 00:21:38.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:38.035 17:06:56 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.035 17:06:56 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.035 17:06:56 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.294 17:06:56 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.294 17:06:56 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.295 --rc genhtml_branch_coverage=1 00:21:38.295 --rc genhtml_function_coverage=1 00:21:38.295 --rc genhtml_legend=1 00:21:38.295 --rc geninfo_all_blocks=1 00:21:38.295 --rc geninfo_unexecuted_blocks=1 00:21:38.295 00:21:38.295 ' 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.295 --rc genhtml_branch_coverage=1 00:21:38.295 --rc genhtml_function_coverage=1 00:21:38.295 --rc genhtml_legend=1 00:21:38.295 --rc geninfo_all_blocks=1 00:21:38.295 --rc geninfo_unexecuted_blocks=1 00:21:38.295 00:21:38.295 ' 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.295 --rc genhtml_branch_coverage=1 00:21:38.295 --rc genhtml_function_coverage=1 00:21:38.295 --rc genhtml_legend=1 00:21:38.295 --rc geninfo_all_blocks=1 00:21:38.295 --rc geninfo_unexecuted_blocks=1 00:21:38.295 00:21:38.295 ' 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.295 --rc genhtml_branch_coverage=1 00:21:38.295 --rc genhtml_function_coverage=1 00:21:38.295 --rc genhtml_legend=1 00:21:38.295 --rc geninfo_all_blocks=1 00:21:38.295 --rc geninfo_unexecuted_blocks=1 00:21:38.295 00:21:38.295 ' 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.295 17:06:56 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52629f53-148b-4998-bf25-3a0243614ae8 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=52629f53-148b-4998-bf25-3a0243614ae8 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.295 17:06:56 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.295 17:06:56 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.295 17:06:56 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.295 17:06:56 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.295 17:06:56 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:38.295 17:06:56 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.295 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:38.295 /tmp/:spdk-test:key0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:38.295 17:06:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:38.295 17:06:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:38.295 /tmp/:spdk-test:key1 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85756 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:38.295 17:06:56 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85756 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85756 ']' 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.295 17:06:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:38.554 [2024-11-20 17:06:56.869242] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
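The prep_key calls traced a few lines up turn each plain hex secret into the NVMe/TCP PSK interchange form (the NVMeTLSkey-1:00:...: strings that show up later in the keyctl traces) and stash it in a mode-0600 file under /tmp. Below is a minimal stand-alone sketch of the same transformation; the inline Python and the assumption that a little-endian CRC-32 of the key is appended before base64 encoding are mine, not copied from nvmf/common.sh, so treat it as illustrative only.

  # Sketch only: wrap a hex secret roughly the way prep_key/format_interchange_psk does.
  # Assumption: CRC-32 trailer appended little-endian, then base64 -- verify against nvmf/common.sh.
  key=00112233445566778899aabbccddeeff     # same value as key0 in the run above
  path=/tmp/:spdk-test:key0
  psk=$(python3 - "$key" <<'PY'
  import sys, base64, struct, zlib
  key = sys.argv[1].encode()                   # the hex string is used as literal ASCII bytes
  crc = struct.pack("<I", zlib.crc32(key))     # assumption: little-endian CRC-32 trailer
  print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
  PY
  )
  echo "$psk" > "$path"
  chmod 0600 "$path"                           # keep the PSK private to the test user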
00:21:38.554 [2024-11-20 17:06:56.869581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85756 ] 00:21:38.554 [2024-11-20 17:06:57.023987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.812 [2024-11-20 17:06:57.089622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.812 [2024-11-20 17:06:57.180309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:39.071 [2024-11-20 17:06:57.404139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.071 null0 00:21:39.071 [2024-11-20 17:06:57.436111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.071 [2024-11-20 17:06:57.436322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:39.071 675063393 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:39.071 915182958 00:21:39.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85767 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:39.071 17:06:57 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85767 /var/tmp/bperf.sock 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85767 ']' 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.071 17:06:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:39.071 [2024-11-20 17:06:57.520813] Starting SPDK v25.01-pre git sha1 2e015e34f / DPDK 24.03.0 initialization... 
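Once the target is listening on 127.0.0.1:4420, linux.sh@66-67 loads both interchange-format PSKs into the session keyring that keyctl-session-wrapper joined at the start of the test ("Joined session keyring: 896703319"), and the kernel returns the serial numbers (675063393 and 915182958) that the later checks and the cleanup path refer to. Run by hand inside such a session, the equivalent commands would look roughly like this (reading the key material back from the files prepared earlier is an assumption about how linux.sh passes it; the trace shows the literal strings):

  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints the new key's serial number
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s
  keyctl search @s user :spdk-test:key0                              # resolve the serial again by name
  keyctl print "$(keyctl search @s user :spdk-test:key0)"            # dump the stored NVMeTLSkey-1 string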
00:21:39.071 [2024-11-20 17:06:57.521100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85767 ] 00:21:39.329 [2024-11-20 17:06:57.673471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.329 [2024-11-20 17:06:57.735371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.329 17:06:57 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.329 17:06:57 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:39.329 17:06:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:39.329 17:06:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:39.586 17:06:58 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:39.586 17:06:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:39.844 [2024-11-20 17:06:58.362882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.102 17:06:58 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:40.102 17:06:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:40.360 [2024-11-20 17:06:58.660713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.360 nvme0n1 00:21:40.360 17:06:58 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:40.360 17:06:58 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:40.360 17:06:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:40.360 17:06:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:40.360 17:06:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.360 17:06:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:40.617 17:06:58 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:40.617 17:06:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:40.617 17:06:59 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:40.617 17:06:59 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:40.617 17:06:59 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.617 17:06:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.617 17:06:59 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:40.873 17:06:59 keyring_linux -- keyring/linux.sh@25 -- # sn=675063393 00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
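Because bdevperf was launched with -z --wait-for-rpc, the Linux keyring support has to be switched on before the framework finishes initializing; only then can the controller be attached with a --psk that names a kernel key instead of a file. Condensed from the bperf_cmd traces above, the RPC sequence against the bperf socket is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length           # 1 key registered after the attach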
00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@26 -- # [[ 675063393 == \6\7\5\0\6\3\3\9\3 ]] 00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 675063393 00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:40.874 17:06:59 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:41.131 Running I/O for 1 seconds... 00:21:42.065 12371.00 IOPS, 48.32 MiB/s 00:21:42.065 Latency(us) 00:21:42.065 [2024-11-20T17:07:00.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.066 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:42.066 nvme0n1 : 1.01 12370.27 48.32 0.00 0.00 10290.36 6374.87 14834.97 00:21:42.066 [2024-11-20T17:07:00.602Z] =================================================================================================================== 00:21:42.066 [2024-11-20T17:07:00.602Z] Total : 12370.27 48.32 0.00 0.00 10290.36 6374.87 14834.97 00:21:42.066 { 00:21:42.066 "results": [ 00:21:42.066 { 00:21:42.066 "job": "nvme0n1", 00:21:42.066 "core_mask": "0x2", 00:21:42.066 "workload": "randread", 00:21:42.066 "status": "finished", 00:21:42.066 "queue_depth": 128, 00:21:42.066 "io_size": 4096, 00:21:42.066 "runtime": 1.010406, 00:21:42.066 "iops": 12370.274919190899, 00:21:42.066 "mibps": 48.32138640308945, 00:21:42.066 "io_failed": 0, 00:21:42.066 "io_timeout": 0, 00:21:42.066 "avg_latency_us": 10290.361295521823, 00:21:42.066 "min_latency_us": 6374.865454545455, 00:21:42.066 "max_latency_us": 14834.967272727272 00:21:42.066 } 00:21:42.066 ], 00:21:42.066 "core_count": 1 00:21:42.066 } 00:21:42.066 17:07:00 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:42.066 17:07:00 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:42.324 17:07:00 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:42.324 17:07:00 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:42.324 17:07:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:42.324 17:07:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:42.324 17:07:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:42.324 17:07:00 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.583 17:07:01 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:42.583 17:07:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:42.583 17:07:01 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:42.583 17:07:01 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
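The summary table and the JSON block above describe the same one-second bdevperf run: with 4 KiB (4096-byte) random reads, throughput in MiB/s is just IOPS scaled by the I/O size, 12370.27 x 4096 / 2^20 = 48.32 MiB/s, which matches the figure reported in both places.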
00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.583 17:07:01 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:42.583 17:07:01 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:42.843 [2024-11-20 17:07:01.290913] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.843 [2024-11-20 17:07:01.291559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d75d0 (107): Transport endpoint is not connected 00:21:42.843 [2024-11-20 17:07:01.292547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d75d0 (9): Bad file descriptor 00:21:42.843 [2024-11-20 17:07:01.293544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:42.843 [2024-11-20 17:07:01.293709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:42.843 [2024-11-20 17:07:01.293740] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:42.843 [2024-11-20 17:07:01.293753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:42.843 request: 00:21:42.843 { 00:21:42.843 "name": "nvme0", 00:21:42.843 "trtype": "tcp", 00:21:42.843 "traddr": "127.0.0.1", 00:21:42.843 "adrfam": "ipv4", 00:21:42.843 "trsvcid": "4420", 00:21:42.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:42.843 "prchk_reftag": false, 00:21:42.843 "prchk_guard": false, 00:21:42.843 "hdgst": false, 00:21:42.843 "ddgst": false, 00:21:42.843 "psk": ":spdk-test:key1", 00:21:42.843 "allow_unrecognized_csi": false, 00:21:42.843 "method": "bdev_nvme_attach_controller", 00:21:42.843 "req_id": 1 00:21:42.843 } 00:21:42.843 Got JSON-RPC error response 00:21:42.843 response: 00:21:42.843 { 00:21:42.843 "code": -5, 00:21:42.843 "message": "Input/output error" 00:21:42.843 } 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@33 -- # sn=675063393 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 675063393 00:21:42.843 1 links removed 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@33 -- # sn=915182958 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 915182958 00:21:42.843 1 links removed 00:21:42.843 17:07:01 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85767 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85767 ']' 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85767 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85767 00:21:42.843 killing process with pid 85767 00:21:42.843 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.843 00:21:42.843 Latency(us) 00:21:42.843 [2024-11-20T17:07:01.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.843 [2024-11-20T17:07:01.379Z] =================================================================================================================== 00:21:42.843 [2024-11-20T17:07:01.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.843 17:07:01 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85767' 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@973 -- # kill 85767 00:21:42.843 17:07:01 keyring_linux -- common/autotest_common.sh@978 -- # wait 85767 00:21:43.103 17:07:01 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85756 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85756 ']' 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85756 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85756 00:21:43.103 killing process with pid 85756 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85756' 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@973 -- # kill 85756 00:21:43.103 17:07:01 keyring_linux -- common/autotest_common.sh@978 -- # wait 85756 00:21:43.671 00:21:43.671 real 0m5.513s 00:21:43.671 user 0m10.775s 00:21:43.671 sys 0m1.569s 00:21:43.671 17:07:01 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.671 ************************************ 00:21:43.671 END TEST keyring_linux 00:21:43.671 ************************************ 00:21:43.671 17:07:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:43.671 17:07:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:43.671 17:07:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:43.671 17:07:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:43.671 17:07:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:43.671 17:07:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:43.671 17:07:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:43.671 17:07:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:43.671 17:07:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.671 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:21:43.671 17:07:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:43.671 17:07:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:43.671 17:07:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:43.671 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:21:45.573 INFO: APP EXITING 00:21:45.573 INFO: killing all VMs 
00:21:45.573 INFO: killing vhost app 00:21:45.573 INFO: EXIT DONE 00:21:45.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.090 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:46.090 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:46.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.657 Cleaning 00:21:46.657 Removing: /var/run/dpdk/spdk0/config 00:21:46.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:46.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:46.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:46.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:46.657 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:46.657 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:46.657 Removing: /var/run/dpdk/spdk1/config 00:21:46.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:46.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:46.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:46.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:46.657 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:46.657 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:46.657 Removing: /var/run/dpdk/spdk2/config 00:21:46.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:46.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:46.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:46.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:46.657 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:46.657 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:46.657 Removing: /var/run/dpdk/spdk3/config 00:21:46.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:46.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:46.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:46.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:46.657 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:46.657 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:46.916 Removing: /var/run/dpdk/spdk4/config 00:21:46.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:46.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:46.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:46.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:46.916 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:46.916 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:46.916 Removing: /dev/shm/nvmf_trace.0 00:21:46.916 Removing: /dev/shm/spdk_tgt_trace.pid56729 00:21:46.916 Removing: /var/run/dpdk/spdk0 00:21:46.916 Removing: /var/run/dpdk/spdk1 00:21:46.916 Removing: /var/run/dpdk/spdk2 00:21:46.916 Removing: /var/run/dpdk/spdk3 00:21:46.916 Removing: /var/run/dpdk/spdk4 00:21:46.916 Removing: /var/run/dpdk/spdk_pid56576 00:21:46.916 Removing: /var/run/dpdk/spdk_pid56729 00:21:46.916 Removing: /var/run/dpdk/spdk_pid56935 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57020 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57041 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57151 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57161 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57295 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57491 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57645 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57723 00:21:46.916 
Removing: /var/run/dpdk/spdk_pid57807 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57904 00:21:46.916 Removing: /var/run/dpdk/spdk_pid57978 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58022 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58052 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58116 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58221 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58661 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58705 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58749 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58765 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58832 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58848 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58915 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58923 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58973 00:21:46.916 Removing: /var/run/dpdk/spdk_pid58992 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59032 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59050 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59186 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59222 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59299 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59631 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59643 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59674 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59693 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59714 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59733 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59741 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59762 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59781 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59800 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59810 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59840 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59848 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59869 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59888 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59897 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59917 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59938 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59957 00:21:46.916 Removing: /var/run/dpdk/spdk_pid59973 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60003 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60022 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60052 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60118 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60152 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60163 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60191 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60205 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60208 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60256 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60267 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60300 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60310 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60319 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60329 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60338 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60348 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60357 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60367 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60395 00:21:46.916 Removing: /var/run/dpdk/spdk_pid60427 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60437 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60465 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60475 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60482 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60523 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60534 00:21:47.175 Removing: 
/var/run/dpdk/spdk_pid60561 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60574 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60581 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60589 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60596 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60604 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60611 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60619 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60701 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60751 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60867 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60902 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60947 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60967 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60978 00:21:47.175 Removing: /var/run/dpdk/spdk_pid60998 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61035 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61056 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61133 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61150 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61194 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61271 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61329 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61356 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61456 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61504 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61536 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61763 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61866 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61889 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61924 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61952 00:21:47.175 Removing: /var/run/dpdk/spdk_pid61991 00:21:47.175 Removing: /var/run/dpdk/spdk_pid62024 00:21:47.175 Removing: /var/run/dpdk/spdk_pid62056 00:21:47.175 Removing: /var/run/dpdk/spdk_pid62448 00:21:47.175 Removing: /var/run/dpdk/spdk_pid62488 00:21:47.175 Removing: /var/run/dpdk/spdk_pid62844 00:21:47.175 Removing: /var/run/dpdk/spdk_pid63308 00:21:47.175 Removing: /var/run/dpdk/spdk_pid63593 00:21:47.175 Removing: /var/run/dpdk/spdk_pid64454 00:21:47.175 Removing: /var/run/dpdk/spdk_pid65387 00:21:47.175 Removing: /var/run/dpdk/spdk_pid65504 00:21:47.175 Removing: /var/run/dpdk/spdk_pid65572 00:21:47.175 Removing: /var/run/dpdk/spdk_pid66973 00:21:47.175 Removing: /var/run/dpdk/spdk_pid67289 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71017 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71385 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71495 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71631 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71658 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71688 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71722 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71809 00:21:47.175 Removing: /var/run/dpdk/spdk_pid71951 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72103 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72192 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72385 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72449 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72542 00:21:47.175 Removing: /var/run/dpdk/spdk_pid72891 00:21:47.175 Removing: /var/run/dpdk/spdk_pid73315 00:21:47.175 Removing: /var/run/dpdk/spdk_pid73316 00:21:47.175 Removing: /var/run/dpdk/spdk_pid73317 00:21:47.175 Removing: /var/run/dpdk/spdk_pid73572 00:21:47.175 Removing: /var/run/dpdk/spdk_pid73837 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74217 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74219 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74548 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74566 
00:21:47.175 Removing: /var/run/dpdk/spdk_pid74587 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74612 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74617 00:21:47.175 Removing: /var/run/dpdk/spdk_pid74980 00:21:47.175 Removing: /var/run/dpdk/spdk_pid75029 00:21:47.175 Removing: /var/run/dpdk/spdk_pid75357 00:21:47.175 Removing: /var/run/dpdk/spdk_pid75553 00:21:47.175 Removing: /var/run/dpdk/spdk_pid75983 00:21:47.175 Removing: /var/run/dpdk/spdk_pid76537 00:21:47.175 Removing: /var/run/dpdk/spdk_pid77413 00:21:47.175 Removing: /var/run/dpdk/spdk_pid78056 00:21:47.434 Removing: /var/run/dpdk/spdk_pid78059 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80086 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80139 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80192 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80253 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80361 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80427 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80474 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80527 00:21:47.434 Removing: /var/run/dpdk/spdk_pid80896 00:21:47.434 Removing: /var/run/dpdk/spdk_pid82112 00:21:47.434 Removing: /var/run/dpdk/spdk_pid82257 00:21:47.434 Removing: /var/run/dpdk/spdk_pid82501 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83099 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83259 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83416 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83513 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83683 00:21:47.434 Removing: /var/run/dpdk/spdk_pid83793 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84493 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84528 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84569 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84827 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84858 00:21:47.434 Removing: /var/run/dpdk/spdk_pid84892 00:21:47.434 Removing: /var/run/dpdk/spdk_pid85365 00:21:47.434 Removing: /var/run/dpdk/spdk_pid85375 00:21:47.434 Removing: /var/run/dpdk/spdk_pid85629 00:21:47.434 Removing: /var/run/dpdk/spdk_pid85756 00:21:47.434 Removing: /var/run/dpdk/spdk_pid85767 00:21:47.434 Clean 00:21:47.434 17:07:05 -- common/autotest_common.sh@1453 -- # return 0 00:21:47.434 17:07:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:47.434 17:07:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.434 17:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.434 17:07:05 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:47.434 17:07:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.434 17:07:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.434 17:07:05 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:47.434 17:07:05 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:47.434 17:07:05 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:47.434 17:07:05 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:47.434 17:07:05 -- spdk/autotest.sh@398 -- # hostname 00:21:47.434 17:07:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:47.693 geninfo: WARNING: invalid characters removed from testname! 
00:22:14.232 17:07:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:18.431 17:07:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.332 17:07:38 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.613 17:07:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:26.140 17:07:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:29.424 17:07:47 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:31.979 17:07:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:31.979 17:07:50 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:31.979 17:07:50 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:31.979 17:07:50 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:31.979 17:07:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:31.979 17:07:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:31.979 + [[ -n 5206 ]] 00:22:31.979 + sudo kill 5206 00:22:31.988 [Pipeline] } 00:22:32.006 [Pipeline] // timeout 00:22:32.011 [Pipeline] } 00:22:32.026 [Pipeline] // stage 00:22:32.031 [Pipeline] } 00:22:32.045 [Pipeline] // catchError 00:22:32.054 [Pipeline] stage 00:22:32.057 [Pipeline] { (Stop VM) 00:22:32.069 [Pipeline] sh 00:22:32.365 + vagrant halt 00:22:36.551 ==> default: Halting domain... 
00:22:41.852 [Pipeline] sh 00:22:42.132 + vagrant destroy -f 00:22:45.415 ==> default: Removing domain... 00:22:45.997 [Pipeline] sh 00:22:46.283 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/output 00:22:46.291 [Pipeline] } 00:22:46.308 [Pipeline] // stage 00:22:46.315 [Pipeline] } 00:22:46.331 [Pipeline] // dir 00:22:46.339 [Pipeline] } 00:22:46.355 [Pipeline] // wrap 00:22:46.363 [Pipeline] } 00:22:46.378 [Pipeline] // catchError 00:22:46.389 [Pipeline] stage 00:22:46.391 [Pipeline] { (Epilogue) 00:22:46.406 [Pipeline] sh 00:22:46.687 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:53.351 [Pipeline] catchError 00:22:53.353 [Pipeline] { 00:22:53.364 [Pipeline] sh 00:22:53.644 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:53.644 Artifacts sizes are good 00:22:53.653 [Pipeline] } 00:22:53.664 [Pipeline] // catchError 00:22:53.673 [Pipeline] archiveArtifacts 00:22:53.679 Archiving artifacts 00:22:53.803 [Pipeline] cleanWs 00:22:53.815 [WS-CLEANUP] Deleting project workspace... 00:22:53.815 [WS-CLEANUP] Deferred wipeout is used... 00:22:53.821 [WS-CLEANUP] done 00:22:53.823 [Pipeline] } 00:22:53.837 [Pipeline] // stage 00:22:53.841 [Pipeline] } 00:22:53.854 [Pipeline] // node 00:22:53.859 [Pipeline] End of Pipeline 00:22:53.888 Finished: SUCCESS